Episode 12: Generative AI in L&D and
the evolution of eLearning

Duration: 34min

About our guest:

Stella Lee

Dr. Stella Lee has over 20 years of progressive international experience consulting on digital learning initiatives with higher education, government, NGOs, and the private sector.

Today her focus is on enterprise-wide learning strategy and governance, digital ethics for learning, artificial intelligence (AI) and eLearning applications, learning management system (LMS) design, evaluation, and learning analytics.


What does the surge of generative AI mean for the future of L&D? And how will the roles of L&D pros be reimagined in response?

Season two of ‘Keep it Simple’ kicks off with Director of Paradox Learning, AI strategist, and eLearning expert, Stella Lee. Together with Lee, we’ll unpack why picking the right AI tools is like shopping for a car and reveal the dos and don’ts of upskilling an AI-literate workforce. We’ll also give you the lowdown on why everyone in an organization should have a stake when it comes to adopting a beneficial and ethical AI policy.

Key takeaways:

  • AI tools can be integrated into every step of the ADDIE model (Analysis, Design, Development, Implementation, and Evaluation). These tools are exceptional for analyzing learner data, ideation in design, developing content, and even evaluating learning outcomes.

  • Critical thinking and human oversight aren’t a nice-to-have but a must-have when using AI in learning. AI outputs are indeed useful, but they need human curation and validation to ensure they’re accurate. L&D experts should always be aware of how unpredictable AI-generated outputs are, meaning they must continuously evaluate the credibility of AI content.

  • Regarding the future of AI, AI literacy must become a top priority within organizations. Stella Lee proposes an AI literacy framework that includes understanding AI essentials, data fluency, critical thinking, diverse use cases, ethics, and future-of-work implications. By testing current knowledge and tailoring AI training to different teams, L&D pros can ensure the effective and appropriate use of AI in learning and development.

  • There are ethical implications regarding AI in L&D, including data privacy, bias, and environmental impact. When organizations discuss ethics and build policies, diverse perspectives should be included. This ensures that AI adoption aligns with the company culture. Creating comprehensive ethical guidelines is also necessary for responsible AI application in L&D.

Want more resources on this topic?

Preparing for an AI-driven future of work


Skills for success in an AI-driven future


AI LMS: How to pick the best one


More episodes we think you’ll love


October 25, 2023 • 30 min.

Learning & Development

Power Up Training with Interactive Content Design

How can we make sure employees are active and engaged during training? It’s time to elevate our learning game. We talk with Karl Kapp, Gamification and Instructional Design expert, professor at Bloomsburg University, and TEDx speaker. Karl reveals the secrets behind crafting effective, meaningful, and interactive content design for businesses big and small.

December 6, 2023 • 40 min.

Organizational Design | People Management | Learning & Development

Preparing for the AI-powered workplace

What’s on the horizon beyond ChatGPT? How can you prepare for AI-led digital transformation? We sit down with Ronald Ashri to discuss everything from employee privacy to rethinking how teams are trained and emerging new AI-related roles (prompt engineers?!). Join us as we skip the speculation and get stuck into the concrete considerations organizations should take into account when adopting new AI tools.


January 31, 2024 • 38 min.

People Management | Learning & Development

Defining & Decoding Company Culture

What is company culture? Is it ping-pong tables and perks? Is it the values we talk about internally? Or is it the way the company works “on a random Tuesday in October”? With experience at Disney, Nike, and Google, Talent Exec, Andre Martin joins the discussion. Join us and find out why authenticity and intention lie at the heart of a consistent culture.



Full Episode Transcript

[00:00:00] Host: Welcome to Keep It Simple, a podcast where we’re challenging business and leadership experts to cut through the noise of the corporate world and get to the bottom of what makes the workplace actually work. This season, we’ll be talking about everything from the importance of reskilling in the age of [00:00:30] AI to tips on managing a multigenerational workforce.

[00:00:34] And what’s on the horizon for learning and development professionals. I’m your host, Mina Vogia.

[00:00:44] Keep It Simple is brought to you by TalentLMS, the training platform built for success and designed with simplicity in mind. Together, let’s uncomplicate what makes a winning workplace. You can find out more at talentlms.com. [00:01:00]

[00:01:03] On today’s episode:

[00:01:03] Stella Lee: So you want to start from what does your business need? Is this creating new opportunities for you, by implementing an AI tool? Is this addressing specific pain points that you’re experiencing? Is it solving problems for you?

[00:01:20] Host: To kick off this season of Keep It Simple, we’re exploring the AI-enabled workplace of tomorrow and asking how the landscape of learning and development is keeping up with the [00:01:30] rapid changes of technology.

[00:01:31] Joining me is Stella Lee, AI strategist, eLearning expert, and director of the consulting firm Paradox Learning. Together, we’ll break down the must-haves of a successful AI policy, discuss what it actually means to be AI literate, and shed some light on the human skills that will set talent apart in a blended workforce. Stay with us. [00:01:55]

[00:02:03] So Stella, thank you so much for joining us today. It’s so great to have you. I would love to talk to you about the advancements of AI and how, especially in the past year, they’ve happened at such a breakneck pace. And there have been studies trying to predict the impact on the workplace in general.

[00:02:24] A recent one even shows that generative AI is set to absorb 60 to 70 [00:02:30] percent of employees’ work time. But what we’re really interested in talking about is the technology’s impact on learning and development. So in your opinion, what impact has it already had on the L&D landscape over the past year or two?

[00:02:45] Stella Lee: Yeah, I mean, it’s clearly the hot topic, right? And things are happening so fast, as you say. It’s still happening. So when you talk about impact, it’s very difficult to see. I [00:03:00] think last year we were just reacting to all the innovations, and this year we’re slowly taking a bit more of a proactive approach, if you will. And there are a lot more tools available.

[00:03:13] Very cheap, some are free, some are low cost, and I see people starting to use them, trying them out. I think about the impact from an instructional design perspective, for example: if we look at the traditional model, ADDIE, the Analysis, Design, [00:03:30] Development, Implementation, and Evaluation model, which to a large extent is still what we follow.

[00:03:38] In every phase of ADDIE, we are seeing AI tools being embedded into the process, right? For example, analysis: looking at identifying skills and knowledge gaps. Now there are AI tools that can help us ingest and understand some of this information, some of the data that’s [00:04:00] been collected about your learners, about the organization, about the learning content.

[00:04:05] Now there are AI tools that can help you analyze that data. Again, design and development: we use AI for ideation. I certainly use it to say, hey, give me five alternative titles for this talk, or give me a couple of other activity ideas that mirror the one I just put in. I [00:04:30] want to clarify, though, that when we talk about the rapid advancement of AI, really we’re talking about generative AI, which is a subset of AI.

[00:04:39] AI has been around for 60, 70 years, some would argue even longer. And I think we should keep a broader vision of AI as a field. I mean, there are a lot of really interesting, innovative generative AI platforms right now, but there are bigger AI tools and platforms out there.

[00:04:59] Host: [00:05:00] Great. Thank you. I do want to move on to the implementation of AI, which I know you’ve talked about and you’ve written some really interesting articles on.

[00:05:10] So when organizations are looking to start implementing AI into already existing learning structures, what kind of steps should they take in order for this to happen successfully and as smoothly as possible? 

[00:05:24] Stella Lee: Yeah. A good question and a big question. I think [00:05:30] for implementation or adoption, the first step is just like any technology project, right?

[00:05:36] Or any business project that you know. It’s to understand: what’s your current state? Where are things at? Because implementation of anything doesn’t happen in a vacuum, and you have to understand your tech stack capability and capacity. Also understand the culture and the context in which you’re implementing this tool, and not just countries, but also, you know, subcultures within [00:06:00] industries, right? And even within departments, different domain expertise brings different expectations and mindsets about AI and other things. So you have to understand that. What’s the mindset and the readiness and the capability of your leadership team?

[00:06:16] Are they supportive of that? You know, has that been communicated to the organization? And never mind all the technological readiness. You might call it the ecosystem of different pillars or buckets of consideration. [00:06:30] I’ve written an adoption framework whereby I look at culture, the operational perspective, and a more strategic perspective. And also, you know, you have to look at: do you have existing policies on data privacy, from a governance perspective, right?

[00:06:47] Do you have these things in place? And also your risk: do you have any risk assessment and management policies in place, and what are your best practices within that? And those have to be aligned with your current [00:07:00] business practice.

[00:07:02] Host: And when there are so many vendors offering different AI tools, what key things do learning and development professionals need to evaluate when selecting the right tech to fit their organization?

[00:07:15] Stella Lee: Yeah, that’s tricky, eh? I mean, for example, there’s a website called Futurepedia. There are, like, I don’t know, 20,000 tools there already. So it is very overwhelming. From a personal learning and development perspective, I think it’s a good idea to look [00:07:30] at that. But if you’re trying to procure a tool for a specific purpose for an organization, or even for yourself, I never start by looking at tools. I start with: what do I need? Right? It’s like shopping for a car. You don’t just go to a car dealer and blindly look at hundreds of vehicles and say, oh, what looks good, right?

[00:07:49] Or what looks interesting. It’s good to do that, I think, just to give you a broad overview. But if you want to drill down to the specifics, ask: why am I buying a car? What do I need to use it for? So start with your own use case. And you also need to be really clear: are we implementing AI just because our competitors are implementing it?

[00:08:10] Or are we just keeping up with the Joneses? Or do we feel obligated? That’s never a good place to start implementing AI, because it’s going to pull you in directions that might not be beneficial to your business. So you want to start from: what does your business need? Is implementing an AI tool creating new opportunities for you? Is it addressing specific pain points that you’re experiencing? Is it solving problems for you?

So, start with identifying all those. It could be all of those things, it could be one specific thing, but you need to know. And, you know, no AI tool is going to solve all of your problems, right? And no AI tool is perfect, but you want to drive that selection process based on your needs.

[00:09:02] Host: Those are some really helpful guidelines. When it comes to specifically course creation, in what way can AI help to provide more immersive, personalized, and engaging courses? 

[00:09:16] Stella Lee: Yeah, I think, um, you know, there are multiple ways to do it. I’d like to emphasize that we should still be driving it, right? People that have expertise in learning experience design and instructional design should be [00:09:30] driving it, because as far as I can see, the AI output needs a lot of human correction and oversight. But it can help. I’ve seen platforms that help you write learning objectives. I’ve seen AI tools that suggest activities or different scenarios if you’re perhaps writing scenario-based learning experiences. You can provide the basics, like perhaps bullet points, and it can help you create [00:10:00] scenarios.

Again, it’s really based on your prompts and where the model is drawing its data from. Be mindful of perpetuating some of the myths. I still see so many outputs referring to learning styles, for example. And it’s because the web is full of that misinformation; there are still even university websites that talk about learning styles, and that got ingested into the LLM and spit out as [00:10:30] output, right? So we need to be mindful that even if it gives you citations and references, they might be inaccurate. But it does help, because it helps you think about scenarios you perhaps haven’t thought of, and you can add to that. I’ve seen AI tools that can help you write quizzes, questions and answers, and provide feedback or suggested feedback. I’ve seen AI tools that suggest a mode of delivery, like, oh, for this topic, have you [00:11:00] considered a podcast with the following outline, right? So those are good, but nothing an experienced learning designer wouldn’t know and wouldn’t come up with. With AI tools, I think sometimes it’s faster to get there.

[00:11:15] Sometimes it gives you a different perspective. Sometimes it gives you things that perhaps, you know, your tired brain at the end of the day hasn’t thought of. Or sometimes there’s what I like to call blank canvas syndrome: when you start a project, it sometimes feels daunting to start from scratch, facing a blank page, a blank canvas. And even if it’s not a great start, it gives you a start.

[00:11:41] And then by having, you know, a few paragraphs of that, I can start editing and playing and moving and cutting and pasting. And the next thing I know, I’ve actually turned this okay output into something that’s better. If anything, I think we need to be even more vigilant [00:12:00] about how we go about doing our work, because it’s not just our output, it’s AI-augmented, and that part is hugely unpredictable. So we need to be the ones to discern this information. So yeah, critical thinking skills are a must.

[00:12:20] Host: And that’s a great lead to my following question. 

Stella Lee: We didn’t plan this. 

Host: It really wasn’t on purpose. I wish it were.

[00:12:29] But do you have any advice for companies that have made the leap and have integrated AI tech into their L&D structures, but are unsure of their effectiveness? How can they effectively measure their impact?

[00:12:43] Stella Lee: Yeah, again, I think it’s so early. Sometimes there are some indicators, but I would say measure them, but also don’t end there, because sometimes you don’t know until a year down the road, right?

[00:12:56] Again, depending on what you’re trying to measure, I think the basics, the [00:13:00] easier things to measure, are of course how many people use it, how many people actually access it. Is there a good adoption rate? But that’s just one metric. It’s the one that’s quick and easy to measure, and it gives you some indication: if nobody uses it, or there’s a low adoption rate, [00:13:14] it’s telling you there’s something wrong, but then you need to dig deeper. Or even if a lot of people are using it, then you want to know why. How good is it? Are they using it in the way you intended? And sometimes unintended [00:13:30] use is a good thing too, but then you need to know how they’re using it outside of the intention.

[00:13:35] So I always, always advocate mixed-method measurement, not just quantitative, which I think a lot of the measurement conversation tends to gravitate toward: numbers and percentages and things like that. And I get it. It’s easier to understand, but that’s just the first layer, or the first cut, of the story.

[00:13:57] You want to then dig deeper into [00:14:00] looking at the organization in general. Has it made any business impact? Has the implementation or the introduction of the tool made a difference, perhaps on a team-to-team, department-to-department basis, but also across the organization? And this goes back to my original point: well, define your goals. What are you trying to do? Right. And if you are clear at the beginning about what you’re trying to do, then it’s easier, post-implementation, to measure and say, well, have I reached this [00:14:30] goal? There are many, many ways to cut and dice it. My advice for measurement, as always, is don’t go for the easy measurement only. You can as a first step, but don’t stop there. Impact sometimes takes a long time. So do early measurement, but also follow up and measure multiple things.

Host: We’ll get back to our chat with Stella in just a moment, but first, we wanted to share some of our top tips when it comes to AI and the learning experience. 

At TalentLMS, [00:15:00] we’ve always been focused on using technology to make work and life simpler, and we’re excited about the ways AI is allowing us to help our users free up their workload so that they can better leverage their time. So, far from AI making us redundant (fingers crossed), it can actually give us more time for creative and worthwhile tasks.

[00:15:22] When it comes to course creation, try thinking of generative AI as your assistant. From kickstarting the outline for an immersive course [00:15:30] to sharpening up the material you already have, using an LMS with AI capabilities means you can source suggestions, visual aids, photos, quizzes, and even help with the tone of the content, all within the platform you use to deliver your training.

[00:15:47] And that’s just course creation. AI can also be helpful when it comes to employee experience. There’s nothing worse than receiving generic feedback or being retaught the things you already know. It’s no secret [00:16:00] that personalization makes the learning experience that much more engaging. And now, with the right AI tool, learning pathways can be adapted based on the learner’s progress, aspirations, and preferences, and smart recommendations can be made without any extra work.

[00:16:17] Something else that’s often time consuming, but a must, is keeping everything up to date. With an AI assistant, you can have a finger on the pulse when it comes to adding new resources, ensuring all of your training [00:16:30] material is not only engaging, but relevant as well.

[00:16:41] And now, back to AI literacy, which you’ve also talked a lot about. A lot of how AI is affecting the workplace and employees has to do with how AI literate those employees are. So how should [00:17:00] organizations approach upskilling their employees and getting them to a level of AI literacy and fluency?

[00:17:08] Stella Lee: Yeah. And thanks for noticing my framework. Sometimes, you know, I don’t know if people are actually finding it useful, but I’ve been getting a lot of very good feedback on it. So thanks for asking this question. I think you can use the AI literacy framework in a few ways. The first one is to measure existing knowledge, skills, and attitudes.

[00:17:32] So you can use the framework. In the framework, I have seven key areas for L&D. I initially designed it for L&D and educators, but I have now evolved it for everybody to use, even outside of education and the L&D world. So for general-purpose AI literacy, I have six key areas. First, there are AI fundamentals, the basics of AI, right?

[00:17:56] You need to understand, when people talk about machine [00:18:00] learning and large language models, what are those things? And then I have data fluency. AI is nothing without data, and you need to understand a little bit of what’s happening under the hood. Again, I’m not trying to get everybody to be data scientists. [00:18:16] Some people, you know, including me, are not interested in crunching data all day. But I need to know why these models make mistakes. So, you know, where is this data coming from? How does it work, enough [00:18:30] so that you can ask probing and informed questions? And then the third area, which we talked a little bit about, [00:18:37] is critical thinking and fact-checking, because these models are not perfect, because they have vast impact, because there are many, many challenges and limitations. We need to put on our critical thinking hats and know where things might go wrong, know what some of the limitations are, and how do we fact-check?

[00:18:57] How do we verify sources? [00:19:00] And then I have diverse use cases, and that’s to get people to think outside of their own immediate domain, to look at how different applications across industries are using AI differently. And I originally had one on AI pedagogy, specifically for educators and L&D professionals to understand how AI impacts our field.

[00:19:23] And ethics is another big topic. Privacy, resource exploitation, biases, environmental impact, and policies all fall into this umbrella area. And then finally, it’s the future of work. It’s almost a more philosophical angle as well. Like, what makes us human?

[00:19:44] What are some qualities we need to make sure we maintain, like creativity and innovation? Are they distinctively human? If AI were to replace some of our jobs and tasks, how are we going to prepare ourselves? How do we upskill and [00:20:00] reskill to be better prepared? The model also comes with a breakdown of different competencies for each key area at three levels.

[00:20:09] And so I often use that to say, well, again, assess where your people are at, where your organization is at. And you can tailor that to different groups within your organization, because my guess is there are already people using AI in your organization that are really keen, [00:20:30] and they already incorporate it into their routines, perhaps not at work, but in their personal lives.

[00:20:36] So you don’t want to start from scratch with those people. And then there are people that are still very anxious, very afraid, and don’t know where to start with AI, and perhaps don’t want to admit it either. So you might want to gently ease them into some of the basics. One other thing I’d like to start with is that I actually dislike the term artificial intelligence. [00:21:01] It gives you an unrealistic expectation of what it is. It’s not intelligence, and perhaps it is to some people, but what’s the definition of intelligence, right? If you look at the literature, there are over 60 definitions just for human intelligence. So we can’t agree on what it is, never mind machine intelligence.

[00:21:26] So I think even just having that conversation, to kind of [00:21:30] ease the anxiety, is particularly critical right now, where we are slightly blurring what we see in movies and what we’re experiencing now. And I like to emphasize that what we’re experiencing now is still not a general sentient machine that can think for itself. It’s still not that. And so even basic AI literacy about [00:22:00] differentiating fact from fiction, what is possible right now with current technology and what’s still fictional, is a helpful start to that conversation.

[00:22:10] Host: And you also talked about reskilling, and basically, because of the change AI brings, a lot of employees will have to reskill.

[00:22:23] And there’s that extreme end of the scale where it might change what skills we value in the workforce completely [00:22:30] and how we hire. So what is your view on the skills we might see become more important? 

[00:22:37] Stella Lee: Yeah, I think we also have to be careful with reskilling and upskilling. It’s not straightforward, because people’s identities are wrapped up in their work, and it’s hard to lose that, to suddenly be told that you’re not this thing and you have to do this other thing. So I think [00:23:00] giving people agency is critical. So much of AI feels like it just happened upon us. So a critical thing is giving people agency and control over upskilling and reskilling.

[00:23:15] I also think, yeah, in terms of skill sets, like I mentioned, as L&D professionals, if anything, I think we need to hone our expertise even more, just because there’s all this misinformation, all these false [00:23:30] outputs out there, all the uncertainty about how we’re going to select an AI tool. I think, for example, what L&D needs to get better at, or acquire newer skills in, is getting involved in the procurement process, right?

[00:23:46] Get involved in helping select and implement AI tools. Because we are the ones that work very closely with our end users, with our learners. We can advocate for them. We understand what’s happening on the ground. [00:24:00] Sometimes when decisions are made about procuring or implementing a tool, it’s a bit disconnected from the people that are actually using it, so we can be that bridge.

I also think L&D could be upskilled by getting more involved with policy and governance. It’s still a huge gap, and therefore a huge opportunity for us. And we’re good at this; we came from writing instruction and developing and facilitating instruction, right? And learning. [00:24:30] What better fit than helping develop policies and communicating and educating people about them?

[00:24:36] Host: Great. I want to take us back a little bit, because we already touched on this: ethics. It’s, of course, another huge consideration, as you mentioned before, and you’ve argued in the past that the ethics of AI need to be considered more carefully than tech ethics in general. Why is that?

[00:24:58] Stella Lee: Wow. Another big, big, big topic that’s very close to my heart, so I love that you brought it up. I think ethics is always critical. It’s more critical now because the consequences could be severe.

[00:25:17] It’s more difficult to unpack precisely because we don’t know a lot of what’s going on behind the scenes. We don’t know, or we know and we have no control over it. For example, the biggest talk of the past couple of weeks is Google’s AI Overview suggesting that people eat rocks because it’s good for your health, or that you can add glue to pizza sauce to make it thicker.

[00:25:42] And it turned out that it was ingesting an 11-year-old Reddit post from some guy somewhere. I think, you know, Reddit is full of sarcasm, but the machine doesn’t differentiate that, right? It doesn’t get sarcasm, it doesn’t get jokes; it just ingests it as fact and spits it out.

[00:26:02] Also, AI is actually a tool that has a huge environmental impact. Imagine the processing power, the water it needs for cooling, the electricity. We don’t often think about the physical needs of a cloud-based tool. But lithium, for example, is being mined to death around the world. And that’s another concern.

[00:26:27] So it hits on so many different areas of ethical concern, and at a scale that I haven’t seen before within the community. So I think we now need to think about it even more carefully than before, because of the rapid change and the vast impact, and the consequences could be a lot worse. And there’s the fact that it’s not transparent, or even if it’s transparent, we might not have the agency to do anything about it, to push back.

[00:27:03] Host: And you’ve already sort of alluded to the role of L&D in ensuring that ethics and policies are being upheld. What other departments, or who else within the organization, is responsible for ensuring that they are upheld?

[00:27:25] Stella Lee: I think everybody should get involved in ethics. The reason bias is such a problem, for example, is because AI, ChatGPT, and other large language models ingest the internet. Who are the majority of users on the internet? There’s a higher percentage of male users than female users. So that perspective is already skewed.

[00:27:51] And we need more diversity of voices. And by diversity, I don’t mean just gender, but age, domain expertise, cultural background, all kinds of diversity, right? Life experiences. And even within a smaller organization, don’t just get a tech person, don’t get just an HR person; make sure you bring a person who is your end user or your target user into that discussion. Set it up as a working group so things can evolve and you can swap out members.

[00:28:26] And I think it always has to come [00:28:30] from different perspectives, with working towards ethics as a common goal. That’s the best practice, in my opinion.

[00:28:39] Host: And as workplaces continue to evolve, how are the roles of learning and development and HR professionals going to change in the next five to 10 years to remain relevant and complement the tech that is being introduced?

[00:28:54] Stella Lee: Your guess is as good as mine. Five to ten years, you say? I think [00:29:00] what to keep in mind is that AI really forces us to re-examine our roles. I think sometimes L&D can be quite traditional. Sometimes we kind of get, I don’t know if it’s complacent, or we just get used to doing certain things over and over again, right?

[00:29:20] We’ll facilitate workshops. We’ll work to create this learning content. We’ll work to do some coaching sessions. And this is good; it helps us think about whether those are still needed in our organizations, the way we design and deliver and support them, or whether there are better ways now to support L&D, to support performance.

[00:29:43] Another conversation is the deprofessionalization of the field. Like, wow, if AI tools can do these things, everybody can be an L&D professional. Everybody can be an instructional designer. So how are we going to define our profession? How can we set some standards? I think it’s a good opportunity to look into that.

[00:30:05] In terms of what’s going to happen in five to ten years’ time, nobody knows. I definitely don’t know. But I think how we can better prepare ourselves is, of course, having AI literacy to build that foundation, but also having the right mindset: things are going to change no [00:30:30] matter what, we don’t know how, but we have to keep an open mind that things will change, and be ready to say, okay, that’s how I understand it.

[00:30:42] But six months later, things will change, and I have to update my understanding. I have to update my assumptions. I have to say, well, this is what I think, and that thinking could evolve. And it’s completely okay to evolve. And the third thing is to keep experimenting, to keep trying tools. Yes, many tools are still mediocre, but that might change. It might even get worse. There’s a lot of talk that as large language models ingest their own output back into the system, the quality might deteriorate. So it could get worse; that could be one likely scenario too. So watch out for that too.

[00:31:25] If it gets worse, how can we, you know, do some damage control, or step in and edit and make corrections? So I think staying open, flexible, and ready to experiment and try things out is the way forward.

[00:31:45] Host: Perfect. Thank you. And we have one more question for you. 

Stella Lee: Sure. 

Host: Which is a little hard and I’ve already thrown some really tough questions at you that you’ve handled beautifully.

[00:31:58] Stella Lee: Now you’re scaring me.

Host: Apologies. I’m sorry. But we have to ask, staying true to our name of Keep It Simple. So in one sentence, how can we keep things simple when it comes to integrating AI into our learning and development practices? 

Stella Lee: Well, start with what your needs are. 

Host: That’s perfect. [00:32:24] Thank you so much. It’s been such a pleasure talking to you. This has been so interesting. I’m so glad that we’re kicking off the second season with you as our first guest. 

[00:32:35] Stella Lee: Oh, of course, of course. No, it’s always a pleasure. And I’m always coming from: if there’s something I can share that’s helpful, then I’ve done my job.

[00:32:51] Host: TalentLMS has harnessed the power of generative AI to bring you a training experience that’s more seamless and intuitive than ever. TalentCraft, our AI-powered content creator, can build courses with just one prompt. Wave goodbye to tedious content development and say hello to a delightful AI-powered learning experience for you and your teams.

[00:33:18] Thanks for tuning in. In the next episode, we’ll be looking at how to design training courses that don’t just go in one ear and out the other, but actually stick. You can find Keep It Simple on all podcast platforms. Be sure to subscribe so you don’t miss an episode.

This episode of Keep It Simple was brought to you by TalentLMS, the training platform built for success and designed with simplicity in mind. For more resources on today’s topic, visit talentlms.com/podcast.
