In this episode of CGI’s From AI to ROI podcast series, host Fred Miskawi, Vice-President, AI Innovation Expert Services Lead at CGI, is joined by Gaby Lio, John Bailey and Russell Goodenough to explore what it truly means to be AI literate and how organizations can equip their workforce to confidently adopt and apply AI tools across all roles.

The podcast covers real-world insights from client engagements and CGI’s own internal adoption, including defining AI literacy, developing AI literacy for individuals and teams, and measuring success beyond training. Together, they explore what’s missing from today’s AI literacy narratives and offer practical advice and key takeaways for future-proofing teams and organizations.

Key takeaways from the episode:

1. AI literacy is more than just training.

Being AI literate goes beyond training or a checklist. It includes hands-on practice, good judgment, and knowing when to trust, challenge or complement AI output, and how to apply it within your role and organizational context.

“You know enough to be dangerous, you use it in your day-to-day, and you can decipher if you can trust the AI output,” shares Gaby Lio.

2. Success metrics must go beyond training completion and adoption rates.

Best practices in measuring AI literacy include a mix of objective measures, such as training completion, and subjective measures, such as self-assessed mastery and qualitative feedback from training sessions. True value is seen when AI literacy leads to tangible business outcomes such as faster project delivery, increased productivity, and even improved employee wellbeing.

Gaby Lio explains: “It's not just adoption, but what are the outcomes from that adoption that we are seeing?” Fred Miskawi adds: “Value-based metrics, business metrics.”

3. Real-world training and application, together with employee empowerment and change management, drive adoption.

Organizations with higher adoption rates tailor AI training to specific roles and workflows. Practical use cases outperform generic guidance in driving confident usage.

“AI literacy means familiarity and it means being comfortable and confident using day-to-day tools that are empowered with AI. And I think the only way that you can achieve that is by putting the AI in the hands of the users,” explains Russell Goodenough.

4. AI literacy is for everyone, not just those working in technology companies or specific roles.

With AI being such a prevalent and general-purpose technology in its current trajectory, it will be important for everyone to understand what it is and how it can be used, not just those in specific companies or roles.

“Everybody needs to have a good understanding of this technology, the risks, the benefits, the opportunities. I think that it's kind of like the new media literacy, but on steroids,” shares John Bailey.

5. Future-proofing includes choosing a lane and fostering an innovation culture.

Make AI tools ubiquitous, foster top-down leadership support and bottom-up learning, and give employees time and space to experiment. Understanding your role and the tools and techniques you can apply as a starting point, and fostering a team culture of sharing and exploration, helps organizations stay resilient with rapid change.

John Bailey elaborates: “AI literacy is a competitive enabler for many organizations, this kind of way we try to increase the understanding or perspective of the workforce. And through that, then maybe tickle the fancy or give this kind of hunger in the organization to really search for more.”

Learn more and subscribe

Explore more episodes of From AI to ROI and learn how AI is transforming enterprises and government organizations. Visit cgi.com/ai for insights, resources and updates on AI-powered strategies.

Read the transcript

Introduction

Fred Miskawi
There's a question I've been hearing more and more. How do you know if you're truly ready for the AI era? Not just ready to use it, but ready to trust it, challenge it, and actually get the best results from it. But here's the reality. Most people think they are, until the first time an AI gives them an answer that feels right but is actually wrong.

Welcome to From AI to ROI, the podcast where we cut through the noise to share our practical experiences and projects where we're finding real value in AI, and how you can too.

I'm your host, Fred Miskawi, Vice-President, AI Innovation Expert Services Lead at CGI. And in today's episode on AI literacy, we'll break down what it really means to be AI literate, ways to develop it for yourself and your teams, how to measure what's working, and how to keep it sharp in a world that never slows down.

AI is evolving faster than most organizations can keep up, and working effectively with it is no longer optional. It's a baseline skill. And AI literacy isn't just about learning the tools.

It's about mastering two superpowers: context curation, providing the right information at the right time, and intent framing, shaping clear, purposeful requests that drive the outcomes you need, whether you're building software or steering strategy at the highest level.

Today, I'm joined by three people I work with on a regular basis who live this every day, hailing from different parts of the world. From the great state of Texas in the USA, we've got Gaby Lio, who brings her data science and AI expertise and a hands-on perspective on AI in client delivery and transformation. From Finland, John Bailey brings decades of experience in ideating, designing and developing digital and AI-assisted services. And from the UK, Russell Goodenough focuses on making AI practical, measurable, and trusted across industries.

Welcome to the podcast, by the way. It's been amazing working with each of you individually. We'll go through maybe a little bit of personal introductions, and I'll start with you, Gaby.

Gaby Lio
Awesome. Thanks, Fred. Delighted to be here. It's my first podcast, which is exciting. As Fred mentioned, I am from the great big state of Texas. I lead our AI team down here and I am very much engaged in client delivery. So, when it comes to building up a team that can build a custom AI solution, I help not only figure out the right people that need to be on that project, but also what does the architecture need to look like? What's the real business problem we're trying to solve?

What are the metrics we're going to track when this is in production to make sure it is successful? And I advise a lot of clients across different industries on truly what is the right AI solution. I think there's a lot of thoughts out there that AI can solve everything. It is really good, but it's not always the best solution. And so, figuring out maybe there's some other technologies that can solve what's needed or maybe it truly is AI. So, I do a lot of that in my day to day.

Fred Miskawi
Thank you, Gaby. And that's a lot of what we do, right? Finding the best tool for the job based on the particular problem at hand. We're problem solvers, and we've got a new generation of tools at our disposal. Russell, let's go over to you for a personal introduction.

Russell Goodenough
Hi, Fred. So, I'm head of artificial intelligence for CGI in the UK and Australia. There are two aspects to my job. I look after our own AI enablement programs, so that's our own absorption of generative AI technologies in the workplace. And what I try and do is learn lessons from our own work internally and convert that into useful bits of learning and insight for our clients. I also spend an awful lot of time working with my four-year-old son on his Lego models, so it's not all work, work, work.

Fred Miskawi
I love Legos. It's a great way to get a new set and then work as a family, building something together. It's a lot of fun. John?

John Bailey
Yeah, hi Fred, thank you for the invite to the podcast. My name is John Bailey. I'm director of AI here at CGI Finland, part of our Innovation Center of Excellence. And in my day-to-day work, I try to help our clients move forward on their AI journey. So, that can be anything from AI strategy to use case identification to AI literacy, the topic of today. And I'm really looking forward to the discussion here today.

Fred Miskawi
And when we talk about Finland, a powerhouse in artificial intelligence, at least for us within CGI, when it comes to bringing people up to speed, learning, experimenting, absorbing, you guys have been amazing. It's been amazing working with you, as well as with the UK and the U.S. CSG, the United States, for us.

What is AI literacy?

Fred Miskawi
So, I'll go to maybe the first question on this particular topic. We're talking about AI literacy today. We hear a lot about AI literacy, and we get a lot of questions about it, although not necessarily phrased that way; it's more in terms of how many people know AI, much more generic questions on that topic. But we hear a lot about it nowadays from our clients and the market. We even sometimes get the question about what percentage of our consultants are AI literate, which you would think is a difficult question to answer, but it depends on how you define it. So that's part of what we're going to talk about here in the podcast. But it seems to be a term that means a lot to a lot of different people. So, Gaby, we're going to start with you.

Gaby Lio
I think that's hard to answer, as you alluded to earlier, because if we were to think that AI literacy meant that people know about AI tools and are trained in AI tools, I would say almost 100% of our consultants are at that level. But to me, that is not fully encapsulating, you know, what literacy really is. I think that's part one of what AI literacy is.

Part two is, okay, you've been trained on it. You know that these AI tools exist. Are you using it? Is it truly your companion in your day-to-day? Do you have hands-on application using it? And are you encouraging other people to use it? And do you know when is the best time to use it? And then lastly, and you alluded to this as well, Fred: is what the AI is giving me the right answer? Do I need to follow up on that? Am I just taking this as truth? Do I know a better tool from an AI perspective that I can use to get to the right answer?

So, to me it's really threefold. You know enough to be dangerous. You use it in your day-to-day. And then you can decipher if you can trust the AI output, or if there's another tool you should go to, to try to get closer to the output that you need.

Fred Miskawi
I like that. We know enough to be dangerous, and that is certainly very accurate. So, John, in Finland, when we hear what Gaby's mentioning, the importance of practice and actually getting things in production, using it and learning from it, and you've got clients coming to you saying, “Hey, we need our population to be AI literate,” how would you define it first, and what would you recommend?

John Bailey
For me, I would take a step back even and talk about data literacy because I think that's the first stepping stone in the whole big picture. I think at least from my perspective, that's where I started this whole concept of data literacy and then building on that to become AI literacy. So, data literacy is the ability to read data and to question data and communicate data. And then AI literacy on top of that is how do we then understand the results of that data in a system and how can we utilize those systems and specifically now within the topic of AI?

But I would say that AI literacy is not only this kind of training or program; it goes a bit beyond that. It's, I would say, a competitive enabler for many organizations, this kind of way we try to increase the understanding or perspective of the workforce. And through that, then maybe tickle the fancy or give this kind of hunger in the organization to really search for more. And that's when I know I've succeeded in the trainings or workshops that I deliver: when people ask for more. And with that kind of hunger, you start to look at your environment in a new way. You start to look for places to implement this kind of capability that you maybe didn't understand before, maybe even had some anxieties about.

Fred Miskawi
A hunger, a desire to learn, the desire to experiment and absorb. You're mentioning data literacy, and I was mentioning context curation. In a lot of ways, these two are tightly, intimately connected. The idea is to understand the importance of data to these solutions. Even with earlier generations of systems, data quality, relevancy, and the ability to understand the data sets and the features in those data sets were very important.

A little bit of that dependency has been reduced with the new generation of unsupervised algorithms, but it's still there. And understanding data, the nature, the hierarchy, and being able to put it in context has become that much more important. So, Russell, jump in there. What do you think?

Russell Goodenough
Well, I was listening to both Gaby and John there, and I was reflecting on some of the observations I've made while we've been rolling out generative AI for everybody. What we've tried to do in our teams is make it ubiquitous. And I think that's perhaps what a lot of our clients are trying to do as well. But we all work in the tech industry, and it's easy for us to sit with the lens of a kind of power user or a super user.

And we talk about AI literacy from the particular standpoint of somebody who's steeped in the technology industry. And I don't think that's necessarily representative of absolutely everybody. Think about people who are working in retailers or manufacturing or civil servants working in government departments, not as steeped in the technology as we are.

AI literacy means something different for those communities. It means familiarity and it means being comfortable and confident using day-to-day tools that are empowered with AI. And I think the only way that you can achieve that is by putting the AI in the hands of the users.

And we think we've measured, at least subjectively, AI literacy stepping up in the organization. So, before we give somebody access to the tools, we ask them: do you think you're a novice, a beginner, or intermediate? Are you advanced, or are you even an expert in AI? And then, six months down the line, we ask them again.

And we've noticed that in that six-month period, by far and away, the majority of people self-assess themselves as improved in their understanding, insight and just general confidence in being able to use AI tools. And I think that probably means more for the general population. That probably means more for all industries. Are you comfortable and confident with what you described as a wave coming across us, Fred? Are you ready for the wave?
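A minimal sketch of how such a pre-and-post self-assessment shift might be tallied. The five levels follow Russell's novice-to-expert scale; the function name, respondent IDs and numbers are hypothetical, not CGI's actual survey tooling.

```python
# Self-assessed AI literacy levels, in order, per Russell's scale.
LEVELS = ["novice", "beginner", "intermediate", "advanced", "expert"]

def share_improved(before: dict[str, str], after: dict[str, str]) -> float:
    """Share of respondents whose self-assessed level went up.

    `before` and `after` map a respondent ID to a level name
    (e.g. {"emp042": "beginner"}); both are invented survey data.
    """
    both = [r for r in before if r in after]  # answered both surveys
    improved = sum(
        LEVELS.index(after[r]) > LEVELS.index(before[r]) for r in both
    )
    return improved / len(both) if both else 0.0

# Example: two of three respondents moved up a level after six months.
before = {"a": "novice", "b": "beginner", "c": "intermediate"}
after = {"a": "beginner", "b": "beginner", "c": "advanced"}
print(f"{share_improved(before, after):.0%} self-assessed as improved")  # 67%
```

Tracking the shift rather than the absolute level sidesteps some of the subjectivity Russell notes: each person is compared only against their own earlier answer.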

Measuring AI literacy and success

Fred Miskawi
You're mentioning surveys. You're mentioning the ability to understand how the user community is reacting, absorbing and evolving with the change. So how do we measure that? And for those of you listening, when we come in with a new solution, one of the first things we do, every time, is set up and establish KPIs: these are the things that we're going to be measuring, and this is how we're going to know that we're being successful. So, for each of you in your geographies, how are you measuring the success that we're having in training, in showing this technology and getting it absorbed?

Gaby Lio
I can start, and I'll give an example from more traditional machine learning and setting up metrics, and how this translates to the metrics we might want to track here. In traditional machine learning, when you build a model, there's something that you're trying to predict. Let's say you're trying to predict customer churn, the probability this customer is going to churn. The reason you're building this model is because you want to save people from churning. So, the metric you would track to is: yes, we have this 90% accurate model, but were we able to actually save people from churning with this model? If not, it's great that people are using it, but it's not really giving us the insights or the outcome that we want. And I view AI enablement with our AI tools the same way. It's great that people are using it. It's great that maybe we have 90% adoption.

But what is the true outcome we're trying to get to with that adoption? Are we winning more bids because we're using AI to parse through our historical bids and therefore, we get better information when we're trying to sell our services? Are we executing projects at a smaller, faster scale, right? Because we've been able to use AI to help our developers.

To me, those are the outcomes I would like to track to specifically in the context of our organization. It's not just adoption, but what are the outcomes from that adoption that we are seeing?
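To make the distinction concrete, a hedged sketch contrasting a model-quality metric with the business outcome Gaby describes. The churn records and retention figures are invented for illustration, not a real client data set.

```python
# Invented records: (model_flagged_at_risk, actually_at_risk, retained_after_outreach)
customers = [
    (True,  True,  True),   # correctly flagged, saved by outreach
    (True,  True,  False),  # correctly flagged, churned anyway
    (True,  False, True),   # false alarm
    (False, True,  False),  # missed by the model, churned
    (False, False, True),   # correctly left alone
]

# Model-quality metric: how often the prediction matched reality.
accuracy = sum(flag == truth for flag, truth, _ in customers) / len(customers)

# Outcome metric: of the genuinely at-risk customers the model surfaced,
# how many did the outreach actually retain?
flagged_at_risk = [c for c in customers if c[0] and c[1]]
saved_rate = sum(kept for _, _, kept in flagged_at_risk) / len(flagged_at_risk)

print(f"Model accuracy: {accuracy:.0%}")                        # 60%
print(f"Flagged at-risk customers retained: {saved_rate:.0%}")  # 50%
```

A model can look healthy on accuracy or adoption while the retention outcome, the reason the model exists, tells a different story.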

Fred Miskawi
Value-based metrics, business metrics. Absolutely, I agree. Russell, John, when you're looking at that, you've got a population that's absorbing these tools. Russell, you've been championing and pioneering the use of ChatGPT Enterprise. How do we measure how well our user community, our employees globally, are absorbing this and how well they're using it? How do we know that they have received the right level of information that they need to be successful and achieve the KPIs that Gaby was just mentioning?

Russell Goodenough
Yeah, I do think it's very, very difficult. I agree with Gaby there. I think you need to take different approaches. We've definitely relied on subjective measures, allowing the users to self-assess, in AI literacy, as I mentioned, but also in how much time they've saved, or what the increment is. How would you measure the improved quality that they've applied to their work products?

We also try and measure some other features, such as health and wellbeing. Do you know, is it improving the way our consultants feel about their daily working practices? So, we've tried to measure all sorts of facets of AI adoption, not just because we want to prove a return on investment, but because we want to prove that it's helping us be better.

So, I think it's a bit of a balanced scorecard in that regard. It's subjective measures. But yeah, I think it is a complex task.

Fred Miskawi
Okay, so John, from your perspective, when we look at the way that we're measuring the nature of the type of engagements that we have with clients, how do you translate that into our client relationships, the way that we measure what we put in contracts?

John Bailey
I think for me, building on what Russell and Gaby already said, it's a combination of both. Personally, because I actually drive many of these kinds of workshops and do a lot of client-facing discussions, the first and best measuring device here is actually the sentiment of the people we are working with. So, when we have a workshop and you see the eyes open up and people say, “Okay, this is what we're talking about.” Or when you build a training framework for a client and you can see they are reacting in the right way. And I think, like we already discussed, it's a combination of having these softer and harder targets to hit. Because if you just look at the metrics, they tell one story. But then when we have interviews or we ask people to self-assess, we can see another story.

Further client learnings on AI literacy

Fred Miskawi
And we do that on a regular basis with those client stories: we work in tandem with teams, and we upskill the teams we work with. So Gaby, Russell, maybe share a personal experience related to these types of client deployments, and maybe a little bit of visibility into what some of our clients are going through as they work through this topic of AI literacy.

Gaby Lio
Yeah, I can give two very different examples. One is with a client who has fully adopted AI. They have deployed over 20 different internal tools that they expect all of their people to use across all of their projects to create efficiencies. Things like this are being pushed down to the people who work on our client projects, and we're being affected by that. And so, it's important that we're doing very role-specific trainings to help people adopt this more, compared to our competitors, when we're working on projects.

Another company I'm working with is on the far, far opposite side. They deployed Microsoft 365 Copilot, and they were saying, “Hey, we don't even know if people are using it.” I said, “You know, what's your adoption rate? Who are the people adopting it the most, and what skill sets do they have?” They said, “No idea, don't even know.” I said, “Okay, so you're paying for this tool for people to use, but you don't even know if they're using it or if they're getting efficiencies out of their day-to-day.” And so, we partnered with them to create a training program and said, “Hey, let's look at all the roles that could benefit the most from this. Let's start with a pilot group.” We're not just going to train them on how to use Copilot in Excel. It's a construction company. So, we're actually going to take one of your project estimates. We're going to go through that. We're going to see how we can apply AI on top of that within Excel.

Again, I think it's very tailored specific trainings that are needed to even get to that level of adoption and efficiencies that we're looking for with our clients.

It also showcases a little bit that I think people are expected to be a bit more multifaceted in today's workplace because of these tools that we have. You're no longer just a developer. You're no longer just a project manager. Because you have these tools at your disposal, you are expected to do more with that and be able to know more. And so, I think that's something that we all need to start thinking about as well.

Fred Miskawi
I agree. And part of AI literacy for me is knowing when not to automate. Just because you can automate now doesn't mean you should. We're getting to a point where we have to actively decide, as part of AI literacy, what should be automated, what should be handled by these agents, by these tools, versus what humans should be handling. And I think when we look at the deployment of ChatGPT Enterprise across the corporation, we're doing a lot of that learning, right? As we're creating custom GPTs, understanding and knowing, experimenting. Russell, have you learned anything from that perspective in terms of when not to use the technology versus when to use it?

Russell Goodenough
I think we're learning loads in every regard, Fred. Reflecting on what Gaby's described there, what's true across all our teams, irrespective of industry, is that the skills of coordinating and orchestrating the work change depending on the maturity of the team, the stage of the team's evolution, and the tasks.

I think one thing that's really hit home for me has been, in order to accommodate the pace of change at the moment, we've needed to work slightly differently. In order to be able to absorb the tech and change our working patterns quickly, I don't think it's within the skill set or I don't think it's within the gift of a team at the center of the business to dictate how everybody else should be working. So, I think some of those rules are breaking down. I think what we've had to do with ChatGPT as you mentioned is we've had to trust the wisdom of the crowd. We're crowdsourcing innovation. We're saying, “Okay, well, let's put the AI in the hands of our workforce and then let's learn lessons from a really broad set of our teams.”

And if I can cite an article that I read from a commentator called Ethan Mollick, he wrote a paper recently about making AI work. And I think that resonates really strongly with our own experience: we've had a leadership that's ready to role-model AI adoption, but we've also got critical mass in the business to be able to learn those lessons about how different working practices should change. And then we've tried to industrialize those. Does that make sense?

Inaccuracies in using AI

Fred Miskawi
It does, and it enables us to scale, and our clients to scale. And connected to that, as part of this topic, it's not just learning how to use these tools, but knowing and understanding the shortcomings of the tools. So broadly, for the three of you: these are statistical responses based on the context and the intent that was expressed, and a certain percentage of inaccuracies invariably comes out. How do we teach our clients and our partners how to deal with that percentage of inaccuracies that comes with this technology?

John Bailey
I have a short anecdote on that. Previously, we talked about traditional AI, meaning before generative AI. Those systems were many times used to try to fix or find errors that humans made in large quantities of data. And now it's the other way around: now, people are trying to identify the errors and the problems that arise from AI. So, we are kind of switching chairs between what we do and what AI provides in a system. I think that's one topic that is key, and that's why I think AI literacy is a very important skill, here meaning the ability to see through the fog. So, not relying completely on the system, not relying completely on the inputs and outputs that are going through a system. And that's also why I usually say that expertise will be even more valuable going forward. When we talk about displacing jobs and changing jobs, expertise will be an even more valuable asset, because those are the people who have the best opportunities to actually validate all of the information that AI is bringing to the table.

Fred Miskawi
What I speak a lot with clients about in different accounts is building healthy paranoia. When you're leveraging the technology, just assume that it's not going to be 100%, and that there's a certain amount of oversight that you need to bring in. And whatever the outcome is, you're responsible for it. And that healthy paranoia, in some ways, is like when we work together, the four of us: whatever I say is not 100% accurate, and you're going to double-check, you're going to absorb feedback from different people and make your own opinion. With this technology, it's the same thing. We're building healthy paranoia and layers of guardrails to make sure we address it.

Russell Goodenough
I think that's definitely true. Because the tech has moved on so quickly, we've allowed artificial intelligence to have a sort of cachet and a fear factor that comes with it. And lots of the rules for traditional change management have been set aside because it's artificial intelligence. Good communication, good insight and awareness are part of introducing new working practices, and part of introducing new technology as well. So, I don't think the fact that we're talking about AI means that those rules are any different. You've got to explain the purpose and intent, with good training and awareness, as you would when introducing any technology; AI is no different. Change management is 90% of the job, and that's not changed.

What’s often missing in conversations about AI literacy

Fred Miskawi
Decades of experience in organizational change management are still applicable. The processes we've put in place to efficiently govern groups of humans are still applicable. We can use the patterns of the past and apply them to the patterns of the future.

I do have a question for you guys. Is there anything else missing in the discussion? There's a lot of content available out there about AI literacy; we belong to multiple groups, and we've even won awards. But is there a particular aspect of AI literacy that you're not seeing as much of out there in the media, something that is very applicable to our clients and what we're doing on a day-to-day basis?

John Bailey
I'd say maybe it is out there, but we don't talk about it as much, especially from the corporate perspective. For me, one of the key topics in AI literacy is that because AI is such a prevalent technology, a general-purpose technology, it will be everywhere. And that means AI literacy is not only for the workforce; it's for society as a whole. Everybody needs to have a good understanding of this technology, the risks, the benefits, the opportunities. I think that it's kind of like the new media literacy, but on steroids.

So how do people act and behave and absorb information and trust in the future? Because more and more of the content we are seeing, listening to, and reading is possibly made by an AI. How do we then filter through that and see reality anymore? And I think here specifically of people who are outside of the workforce, those who maybe don't have AI literacy trainings. Luckily, in schools they're already including these topics. And on the EU level, there are a lot of pushes for this topic, to have societal-level AI literacy. But other people are outside of that. How do we make sure that they also get this kind of understanding of the threats, opportunities and possibilities?

Gaby Lio
And to piggyback off that, to John's point, we are seeing so many people across the workforce being affected by this. So, the accessibility is there for everyone. I think there's too much information right now, which is another thing that isn't talked about as often. When someone comes to me and asks, “Hey, what should I focus on? How deep do I need to go? What tool should I use?”, there are so many different directions we can point them in. And I think it can feel overwhelming, especially with how many new things are coming out daily, how many new tools weekly, right?

So, my advice about this is: pick your lane and make it purpose-built. I don't claim to be an expert in all things AI, and I'll give an example right now. Software development and using AI within software development: I know about it, but I'm not an expert in it. That's something Fred is an expert in, and I lean on Fred to help us with that.

I am more of an expert in custom-built, purpose-specific AI solutions and how we can build AI solutions that get embedded into applications. And so, for the people out there asking, “Where should I start? What tool should I use?”: what is the area that you're good at today where you feel like AI can help create efficiencies? Purpose-build your training program and the tools you want to use around that.

Once you get comfortable with that, you can start expanding, but I don't think there's this need to be a million miles wide. You're going to get very overwhelmed very quickly.

Russell Goodenough
I think my point would be similar, Gaby. Fred, you asked what's not getting covered, and I think lots of the coverage of artificial intelligence makes it sound like it's being determined by somebody else. People don't talk so much about choice. It's our choice how to apply this powerful technology and to what ends we want it to be deployed.

Fred Miskawi
Great point.

Russell Goodenough
It's our choice whether we choose it to impact the efficacy of health care, or perhaps the implementation of the judiciary, in all our regions. And it's kind of a version of what you said, Gaby, that if you put it in people's hands, they can choose to apply it in their own lane.

But people aren't necessarily going straight to how do you use this AI for good. And in the absence of a good strong theme of how do you use AI for good, then it leaves room for fear factor and apprehension. So, I think we should be focusing on AI for good more in the narrative.

John Bailey
I think, building on that, there's also the other side of the coin. We talk a lot about AI literacy and training and sharing. But if we don't give agency, if we don't give people the opportunity to actually apply that understanding that they have, then it just turns into frustration. That's also why I think that AI literacy should be a strategic project, or a strategic push, for any company: not thinking of it only as some kind of upskilling, but also as a new opportunity for the whole organization on this AI journey. If you talk about AI, then provide some tools that people can use. Build sandboxes where they can try these things. Support that journey, because otherwise people will just feel frustrated.

Fred Miskawi
Yeah, you're right, John. And it goes back to the basics: providing an environment that is conducive to learning and experimentation, where people are free to fail, fail fast and recover, learn and understand. Providing the right guardrails to make sure that they do it in a way that's safe, secure and trustworthy, with the right level of oversight and governance to understand how well it's progressing. These are basics that we've introduced for any technology.

I think what we're seeing now is an acceleration of the rate at which these changes are coming in. New models, new capabilities, new agents, new ways and approaches, new ways to migrate and do digital transformations. So, how do we continue to absorb these types of changes and learn with them? Because it's one thing to be AI literate as of six months ago. It's another to be relevant and productive today with the knowledge of six months ago. So, how do we become a little bit more future-proof, and what do we tell our clients about how to be a little bit more future-proof?

How can we future-proof?

Russell Goodenough
When I'm speaking to clients, there's a definite understanding that you can apply artificial intelligence to big-ticket items within a business, but there's less of an appreciation of how you apply it to personal productivity and use that as part of the change management journey. So, familiarization at a relatively small, atomic level is a prerequisite to doing big change in the organization. And I think there's a key to unlock the door here: once you have a number of people using it in a personal-productivity sense, you've got the critical mass to tackle big change projects.

And if you can do it once, you can do it again and again, even if the technology is advancing, even if the capability of our models is moving on, you've built that sort of muscle mass within the business to accommodate the change that comes with automation. Just make these tools more ubiquitous and that helps your organizational resilience.

John Bailey
I think Gaby already touched upon the topic of not having to know the state of the art across all roles, responsibilities and contexts; you need to focus on what you do yourself. So, from the organizational perspective, I think everybody should have the fundamentals and the opportunity to refresh them, just to stay relevant from that perspective, but then role-based deep dives into different topics, and of course communities of people who you can work with.

Gaby Lio
I think: do not put all your eggs in one basket. There is not going to be one magic solution that works best for every single type of worker you have and every problem that you're trying to solve. So, stay open-minded, and don't try to say everyone in the organization must use this one tool. Give them a couple of options, because guess what? One of those tools might come out with a new model while the other one won't, and there's always going to be change going on. So, you want to have options, I think, first and foremost.

Secondly, I come from a data science background and am a very big believer in research, and I advise a lot of our clients to have a team that is specifically focused on new innovations that are coming out. I'll give an example. We built a product for a client two and a half years ago that today would not need to be built custom; it is something that's out of the box and available in almost every single solution. Is that tool still in play today? Yes, it's still in play, but we're finding ways to integrate the new tools that are around now into the old tool. And who's doing that? A dedicated team of research scientists. And so, maybe it doesn't need to be at that deep, developer kind of AI implementation level, but there should be a group in your organization that is watching out for new releases in the market and tinkering with them hands-on. I'm a person that really needs to touch and feel to be able to actually believe what I'm reading in a news article or something. And then have a committee that makes that decision, right? One that is on the lookout for that. So, that would be my suggestion.

Fred Miskawi
That's a good point, Gaby. It's one of the comments that I hear quite a bit, which is I want to learn these tools and I'm spending some time outside of my work hours to do it, but I need more time. I need time to embrace, to use, to experiment. And if we don't provide a little bit of grace to our employees, they're not going to have the time to absorb and be ready for what the future is bringing.

Gaby Lio
So, a part of that is also fostering a team environment where people want to innovate. Imagine yourself on a team of five people. Just one of you is using AI in your day-to-day. You're trying to tell the people around you to use it, and they're not really listening; they're doing their job like they do today, and they're not interested. But what happens when that leader comes in and says, “Hey, Gaby, you're doing a great job using AI. Why don't you partner up with this person and show them exactly how you used it and how much time you saved?” That person becomes a catalyst to go tell the next person, and then the whole team starts using AI, and you start increasing your efficiency tenfold versus just that one person doing it. So, that creates that culture of innovation, and it starts from the bottom up. But I think it's really powerful when it comes from the top down as well, and there's that direction of: we all should be using it, we're all going to share about it, and that way we build that innovation mindset.

Closing: Key takeaways

Fred Miskawi
Thank you, Gaby. So, quick blitz round, last question of the podcast, to each of you: if you want our audience to remember one thing a week from now, what would that one thing be on the topic of AI literacy?

Gaby Lio
Pick your lane and start there. Know your job well. Know how you can use AI to improve efficiencies in your specific role, and then expand from there. Don't get overwhelmed with everything that's in the market. Just start with picking your lane.

John Bailey
I think for me, AI literacy is kind of the best way to include the organization in the company's AI journey. But I think you need to be prepared to answer the demand when the ball gets rolling.

Russell Goodenough
Okay, so for me, I've come to recognize that AI literacy is a bit like any other sort of literacy. And you wouldn't learn how to be literate from becoming an expert in the alphabet. You'd learn how to be literate by reading a lot, by journaling and writing down your thoughts. And the same is probably true for AI literacy, that to remain literate and to stay at the front end of your game, you have to keep being conversant with the technology and I'd encourage people to carry on experimenting.

Fred Miskawi
And for me, it would be: anything that you're learning as it relates to the technology, understanding context and understanding intent, will get you where you need to go with the technology.

If there's one thing to take away from today, it's that AI literacy isn't a checklist. It's a mindset. It's the daily habit of knowing when to lean into AI and when to challenge it. The world isn't going to slow down for us to catch up.

So, the question is, will you build the skills, the habits, and the confidence to stay ahead? Or will you wait until the gap is too wide to close? That choice is yours. Thank you everyone for joining us today on this latest edition of From AI to ROI.