Artificial Intelligence is now thought of as an unstoppable force that will revolutionise industries, optimise decision-making, and unlock unprecedented efficiencies. But for senior data leaders, the reality is more nuanced. The real power of AI doesn’t lie in the technology itself, but in the way that people use it.
This is the paradox at the heart of our recent webinar, The AI Paradox: Human Intelligence Behind AI Success, where industry experts explored the interplay between AI, data literacy, and human oversight.
Featuring insights from Fay Churchill (Head of Data Science at ITV), Mike Le Galloudec (AI Innovation Lead at The Oakland Group), Nic Weatherill (Head of Innovation and AI at Data Literacy Academy), and Katy Gooblar (Director of Education and Data at Data Literacy Academy), the discussion surfaced a critical truth: AI is only as powerful as the intelligence that surrounds it.
For AI to succeed, data leaders must ensure their organisations are data-literate, accountable, and strategically aligned. Let's dig into the most important insights from this conversation.
Data literacy is the foundation of AI success
As AI becomes more embedded in daily business processes, the need for data literacy is more pressing than ever. AI is not a magic wand; it is a cognitive amplifier. If employees don’t understand how data is collected, processed, and interpreted, they will struggle to assess AI-generated outputs critically.
“AI allows us to amplify how we solve business problems,” explained Nic Weatherill. “But if users blindly trust AI outputs, they won’t think critically about their decisions. That’s where data literacy is essential. It enables better, more informed use of AI and prevents blind trust in machine-generated results.”
At ITV, Fay Churchill emphasised how her team integrates data literacy into the business, ensuring that AI is understood at all levels. “We embed ourselves in the business, sitting physically next to our partners and stakeholders while they use our tools. And we don’t just talk to our chiefs and directors; it’s equally important to educate operational teams.”
She then shared a powerful example: a marketing colleague approached her team and said, “I’ve loved working with your data science team, and I see how powerful AI is in marketing. How do I learn more?” That curiosity is a direct result of making AI accessible and tangible to non-technical teams.
For AI adoption to succeed, data leaders must create an environment where people feel empowered to engage with data and AI rather than passively consume their outputs.
The need for human oversight and knowing when to step in
One of the most significant challenges in AI adoption is knowing where to draw the line between automation and human oversight. As AI takes on more decision-making roles, organisations must determine when and where human intervention is necessary.
Nic Weatherill put it bluntly:
"Just because AI can do something doesn’t mean it should."
He highlighted that AI lacks contextual awareness. It can identify patterns, but a pattern is not necessarily the truth. “AI cannot understand all of the context that humans know,” he explained. “That’s why human oversight is critical, to input that additional context AI simply doesn’t have.”
This is especially important when mitigating bias. AI is trained on historical data, which means it can inherit and amplify existing biases. Without human intervention, these biases can lead to poor decision-making, ethical concerns, and reputational risks.
For Mike Le Galloudec, the question of human oversight boils down to one key issue: accountability. “Where does the buck stop in your organisation?” he asked. “If AI makes a decision, who is ultimately responsible for that outcome? You need an actual human to take accountability for the decision that was made.”
He pointed out that handing over critical decisions to AI without oversight is not just a technical risk; it’s a business risk. “No one is going to accept a situation where a medical diagnosis is fully outsourced to AI, and then, when something goes wrong, the response is: ‘The machine said so.’”
This means data leaders must proactively define the boundaries of AI decision-making. When should a human step in? What level of transparency is required? What are the non-negotiables when it comes to ethical AI use?
The role of human intelligence in AI oversight doesn’t stop at mitigating risk after launch; it also ensures strategic, ethical, and responsible deployment.
AI’s biggest opportunity: unlocking new value, not just automating tasks
Despite the risks, AI presents immense opportunities, and the organisations that approach it strategically will benefit the most.
For one, AI’s ability to unlock qualitative insights from unstructured data at scale is an immediate efficiency gain. “Businesses are sitting on mountains of untapped information: customer feedback, emails, meeting notes,” said Nic. “AI allows us to extract meaning from this data in ways that were previously impossible.”
Fay added that AI is enabling hyper-personalisation at ITV. “We can now optimise experiences at an individual level, tailoring content recommendations in ways we never could before.”
But beyond insights, AI’s true power lies in augmenting human capabilities. The best use of AI is helping people do their jobs better, not replacing them.
Mike pointed out that AI is bridging knowledge gaps within organisations. “In industries where expertise is locked away in silos, AI can democratise access to information, making it easier for non-experts to make informed decisions.”
However, he warned against blindly rushing into AI adoption. “You need to show, not just tell. Build a small proof-of-concept project, demonstrate its value, and get buy-in from the team.”
This reflects a broader truth, one we will keep repeating until data leaders truly embrace it in their day-to-day work: AI adoption isn’t about technology alone; it always comes down to people, process, and culture.
The key step to AI success: align it to ROI
To close the discussion, the panellists agreed that one thing will remain key: ensure every AI initiative is aligned to measurable ROI. This final point is critical. AI should never be adopted for its own sake; it must be tied to business outcomes, strategic goals, and real-world impact.
For senior data leaders, the challenge is clear: AI isn’t just about models and algorithms; it’s about people, trust, and strategic implementation. And by embedding data literacy, human oversight, and ROI-driven decision-making, companies can turn AI into a true competitive advantage.
The AI paradox is real: The more we integrate AI into business, the more we need human intelligence to guide it.
As a data leader, your responsibility isn’t just to adopt AI; it’s to ensure it is used ethically, effectively, and strategically.
Introduction
Katy Gooblar: Hello and welcome to The AI Paradox, a webinar delivered by Data Literacy Academy in partnership with Oakland. Today, we’ll be discussing the role of human intelligence in AI success. Welcome, panel! Let's get started.
I’m Katy Gooblar, Director of Education and Data at Data Literacy Academy. I have a keen interest in accessibility and literacy in all things data and AI. I’m passionate about knowledge sharing and societal readiness for the future workforce. Previously, I held senior data roles at the Royal College of Nursing and in the public and legal sectors.
I’m joined today by an incredible panel: Nic, Mike, and Fay. Let’s start with introductions. Nic, would you like to go first?
Panel Introductions
Nic Weatherill: As Katy mentioned, I’m Nic, and I run Innovation and AI at Data Literacy Academy. My background is quite varied, mostly in commercial roles, but I’m a huge AI enthusiast. The launch of ChatGPT was a big moment for me. It really got me deeply engaged in this space. With my commercial experience, I focus on applying AI and technology to solve real business problems.
Katy Gooblar: Fantastic. Mike, you’re next.
Mike Le Galloudec: Hi, everyone! I’m Mike LG, Principal Developer Advocate at The Oakland Group, a data consultancy. However, I’m here today in my role as AI Innovation Lead. I pilot various programmes within Oakland that drive the adoption of generative AI and machine-learning-driven decision-making for our clients.
I’ve been working in the data space for nearly a decade. Before that, I taught secondary school physics to students aged 11 to 18. But today, it’s all about data—and of course, AI.
Katy Gooblar: It’s all interconnected, isn’t it? Fay, over to you.
Fay Churchill: Thanks, Katy. Hi, everyone! I’m Fay, Head of Data Science at ITV, one of the UK’s main public service broadcasters. My team provides machine learning products to our marketing, content, and product teams—essentially across the entire company.
I’ve been in data science for a few years now, but my background is in academic research, specifically in developmental psychology. My focus is on using behavioural science and AI to predict human behaviour—which, as we all know, is quite a challenge!
Setting the stage
Katy Gooblar: Thank you all for your introductions. We encourage audience questions—please submit them in the chat. We’ll address them toward the end of the session, and if we run out of time, we’ll follow up afterward. Thanks to Sarah, who’s coordinating everything behind the scenes.
AI is powerful, we all know that. But without human intelligence and data literacy, it fails to deliver real business impact. Today, we’ll focus on practical, commercially relevant insights for AI success.
So, let’s get started.
Why is AI important in business today?
Katy Gooblar: AI has seamlessly integrated into our lives. Why is it so crucial for business today? Fay, let’s start with you, as AI has been transforming your industry for quite some time.
Fay Churchill: It’s hugely important. The pace of change, particularly with generative AI, is astonishing. Regardless of your industry, to stay competitive, AI has to be in the mix.
For us at ITV, AI enhances the viewer experience by personalising content, optimising marketing, and enabling data-driven decision-making. While AI makes these processes more powerful, it’s also about being innovative. We don’t just want to keep up with competitors like Netflix and Amazon; we want to get ahead.
That being said, it’s worth noting that around 80% of our data science work is still focused on traditional machine learning. We’re just starting to explore generative AI, so while it’s exciting, the foundational ML models remain invaluable.
Katy Gooblar: Mike, you work across different industries. What would you add?
Mike Le Galloudec: It’s worth stepping back to understand what AI is actually good for. As Fay mentioned, there are two major categories: traditional machine learning—which was cutting-edge just a few years ago—and the newer generative AI.
Traditional ML models are well-established, reliable, and essential for decision-making. Generative AI, on the other hand, excels at digesting large volumes of textual information, making it useful for businesses drowning in documentation.
Companies have vast amounts of unstructured data: text files, reports, and meeting notes. But they often struggle to access it. Large language models (LLMs) can transform this data into valuable insights, making AI indispensable.
Katy Gooblar: That aligns with what we’re seeing at Data Literacy Academy. AI, particularly generative AI, has transformed how we access and interact with data. Nic, how are we implementing this internally?
Nic Weatherill: AI is revolutionising how we engage with data. Instead of manually sifting through reports, we use AI to summarise and extract relevant insights. This allows us to work more efficiently and make data-driven decisions faster.
For example, at Data Literacy Academy, we’ve fundamentally changed how we access company documents. Instead of reading lengthy PDFs, we can query AI models to summarise, analyse, or extract specific information. This enhances productivity and decision-making across the company.
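As a quick illustration of the kind of workflow Nic describes, here is a minimal Python sketch using the OpenAI SDK. It is a sketch under stated assumptions, not the Academy’s actual tooling: the model name, file path, and prompt are placeholders, and the same pattern works with any chat-completion API.

```python
# Minimal sketch: querying an LLM to summarise a long internal document.
# Assumes the OpenAI Python SDK (>=1.0) is installed and OPENAI_API_KEY is
# set; the file name and model are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

with open("quarterly_report.txt") as f:  # hypothetical extracted document text
    document_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": "You summarise internal business documents."},
        {
            "role": "user",
            "content": (
                "Summarise the key decisions and action items in the document "
                "below as five bullet points.\n\n" + document_text
            ),
        },
    ],
)
print(response.choices[0].message.content)
```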
Katy Gooblar: So, AI enhances personalisation, competitive advantage, and efficiency. Let’s take this further: What does data literacy have to do with AI success?
The role of Data Literacy in AI success
Katy Gooblar: We’ve talked about AI’s importance, but what does data literacy have to do with it? What does it really mean to enable AI? Nic, let’s continue from your point about quality and take this further.
Nic Weatherill: Sure. One of the key aspects of working effectively with AI is understanding that data literacy isn’t just a technical problem—it’s a cultural one. It starts at the cognitive level. AI is a cognitive amplifier, helping us solve problems, but if people don’t understand how data is collected, processed, and represented, they can’t critically assess AI-generated outputs.
At Data Literacy Academy, we teach learners to question, interpret, and contextualise information. AI allows us to amplify how we solve business problems, but if users blindly trust AI outputs, they won’t think critically about their decisions. That’s where data literacy is essential—it enables better, more informed use of AI and prevents blind trust in machine-generated results.
Katy Gooblar: Absolutely. And Fay, how does this manifest in your business at ITV?
Fay Churchill: We see it as a key responsibility of our data science team. We aren’t a backend team; we embed ourselves in the business, sitting physically next to our partners and stakeholders while they use our tools. This hands-on approach ensures they understand the models they interact with. We also take a bottom-up approach—yes, we talk to our chiefs and directors, but it’s just as important to educate operational teams.
A great example: Recently, someone from our marketing team came to me and said, “I’ve loved working with your data science team, and I see how powerful AI is in marketing—how do I learn more?” That was a huge moment because it showed the impact of our work. Now, we’ve set up a mentoring partnership between them and one of our data scientists so they can deepen their understanding. This kind of curiosity is where the real opportunities come from.
Katy Gooblar: That curiosity and behaviour change is exactly what we want. It’s not about fear or resistance but about excitement and engagement. Mike, given your strategic work at Oakland, how do you see data literacy driving AI success?
Mike Le Galloudec: It’s critical. If you’re bringing AI into an organisation, you need to give people decisions they can believe in. AI brings users closer to data than ever before, which means they’ll start asking more questions about it. That’s great, but it also means your users need to understand where data comes from, why it’s in a certain format, and whether it’s correct.
A major risk in AI adoption is that AI doesn’t “know” what is correct or incorrect; it only knows what’s in the data. If your data isn’t great, AI won’t magically fix it. The more accessible AI becomes, the more organisations must ensure their teams understand data lineage, transparency, and business rules. Without that understanding, decisions become less trustworthy.
Overcoming AI adoption challenges
Katy Gooblar: What are the biggest obstacles to AI adoption, and how do we mitigate risks? Mike, let’s start with you.
Mike Le Galloudec: Transparency. AI models, especially large language models, can generate very convincing nonsense. If users can’t see how an AI arrived at an answer, they won’t trust it. In our AI projects, we build in transparency features like citations and intermediary steps so users can see the decision-making process. Without this, AI adoption becomes much harder.
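Oakland’s implementation isn’t shown here, but one common way to build the citation pattern Mike describes looks roughly like the sketch below: the model is constrained to numbered source snippets and asked to cite them, so users can trace each claim. The snippets, question, and model name are illustrative.

```python
# Sketch of a citation-grounded answer: the model may only use the numbered
# sources provided and must cite them, making its reasoning traceable.
from openai import OpenAI

client = OpenAI()

snippets = [
    "[1] Q3 churn rose 4% among customers on legacy pricing.",
    "[2] Support tickets mentioning 'billing' doubled in September.",
]

prompt = (
    "Answer the question using ONLY the numbered sources below, and cite the "
    "source number after every claim, e.g. [1]. If the sources do not contain "
    "the answer, say so.\n\n"
    + "\n".join(snippets)
    + "\n\nQuestion: Why might churn be rising?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # answer annotated with [n] citations
```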
Fay Churchill: I completely agree. Another big risk is people misunderstanding how AI works. For example, some colleagues didn’t realise that AI-generated responses are probabilistic—if you put in the same prompt twice, you might get different results. That was a revelation for them. We need to make sure people understand these basic AI principles, so they don’t become disillusioned when things don’t work exactly as expected.
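The behaviour Fay describes is easy to demonstrate. In this minimal sketch (prompt and model name are illustrative), a nonzero sampling temperature means the same prompt can yield different completions on each run, because the model samples from a probability distribution over next tokens.

```python
# Sketch: the same prompt run twice can produce different outputs when the
# temperature is above zero, because completions are sampled, not looked up.
from openai import OpenAI

client = OpenAI()
prompt = "Suggest a tagline for a drama series about lighthouse keepers."

for run in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # higher temperature = more variation between runs
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")

# temperature=0 makes outputs much more repeatable, though still not
# strictly guaranteed to be identical across runs.
```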
Nic Weatherill: I’d add that one of the biggest risks is being too slow. Employees are already using AI in their personal lives. If organisations don’t provide a safe, controlled space for AI experimentation, people will turn to their own accounts and introduce security risks. Companies need to act now to provide structured AI environments that encourage learning while maintaining security.
The opportunities AI brings
Katy Gooblar: Let’s shift to the exciting stuff. What opportunities does AI bring? Nic, let’s start with you.
Nic Weatherill: One of the biggest opportunities is unlocking value from qualitative data. Businesses sit on mountains of unstructured data: customer feedback, emails, meeting notes. AI allows us to extract insights from it in ways that were previously impossible. We can now analyse patterns across vast amounts of text data, which is transforming decision-making.
But beyond insights, AI is enabling creativity. We’re seeing AI assist in writing, design, and even coding, augmenting human potential rather than replacing it. The key is knowing how to guide and refine AI-generated work, which is why data literacy is so important. Understanding how to validate AI’s outputs is what makes the difference between using it effectively and being misled.
Fay Churchill: Yes! AI is helping us connect and categorise huge content libraries at ITV, making recommendations more effective. It’s enabling things that would be physically impossible for humans to do manually. But it’s not just about scale; it’s about personalisation. AI allows us to tailor content to audiences in ways that were unimaginable just a few years ago. We can optimise experiences at an individual level, which is incredibly powerful.
Mike Le Galloudec: And let’s not forget automation. AI is taking over routine, repetitive tasks, allowing people to focus on strategic, high-value work. That’s a huge win. But even more exciting is AI’s ability to bridge expertise gaps. In industries where knowledge is locked away in silos, AI can help democratise access to information, making it easier for non-experts to make informed decisions.
Katy Gooblar: It’s such a dynamic time. AI isn’t just a technology shift; it’s a business and cultural shift. And that’s why data literacy is non-negotiable. Without it, companies won’t be able to fully leverage these opportunities.
The Role of Human Oversight in AI Decision-Making
Katy Gooblar: We've talked about the opportunities AI unlocks, but what kind of oversight do we need to ensure it reaches its full potential? What role does human oversight play in this process? Nic, let’s start with you since you were just discussing this.
Nic Weatherill: It's a great question. The key is recognising when we need a human in the loop. Just because AI can operate autonomously doesn’t mean it always should. There are critical moments where human intervention is necessary, whether to mitigate bias, ensure ethical decision-making, or provide contextual understanding.
AI, at its core, is a pattern recognition machine. But recognising patterns doesn’t necessarily mean recognising truth. There’s often much more complexity involved, and that’s where human oversight becomes invaluable.
Context is another major factor. AI doesn’t inherently understand the broader context behind decisions, goals, or business operations the way humans do. When I teach our internal teams how to work with large language models, I always emphasise that they shouldn’t just input a request; they need to provide AI with context. What is the task? What is the business need? Who is the audience? That additional context improves the output significantly.
Another crucial aspect of oversight is ensuring AI is being used for the right reasons. Are we solving the right problems? Are users leveraging AI in a way that aligns with our business strategy? It’s not just about whether AI can do something; it’s about whether it should.
And of course, the challenge is that as AI evolves, so does the role of human oversight. The points at which we need to intervene are constantly shifting, which means businesses must remain adaptable and continually reassess how they integrate AI responsibly.
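The context-rich prompting Nic describes above can be as simple as a template. Here is a minimal sketch; the helper function and all field values are hypothetical, and the assembled prompt can be passed to any chat model in place of a bare request.

```python
# Sketch of a context-rich prompt: rather than a bare request, spell out the
# task, the business need, and the audience before asking for the output.
def build_prompt(task: str, business_need: str, audience: str, request: str) -> str:
    """Assemble a prompt that gives the model explicit working context."""
    return (
        f"Task: {task}\n"
        f"Business need: {business_need}\n"
        f"Audience: {audience}\n\n"
        f"Request: {request}"
    )

prompt = build_prompt(
    task="Draft an internal announcement",
    business_need="Roll out a new expense policy with minimal confusion",
    audience="Non-technical staff across all departments",
    request="Write a 150-word announcement in a friendly, plain-English tone.",
)
print(prompt)
```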
Katy Gooblar: Exactly. Just because we can, doesn’t always mean we should. Fay, Mike, any thoughts on this?
Fay Churchill: Yes, a few quick points. First, not all businesses and industries are the same, so the level of oversight required varies depending on the risk involved. If you’re in finance or healthcare, the stakes are very different from, say, my team’s work in media. So businesses need to take responsibility for assessing AI’s risks within their specific context.
Second, the concept of "human in the loop" is not just about AI; it’s about people. If we move too fast toward automation without bringing employees along for the journey, we create fear and resistance. The best way to mitigate that is through engagement. Involve people, educate them, and implement changes at a pace that makes sense for your organisation.
Mike Le Galloudec: For me, it comes down to accountability. In any organisation, you have to ask: Where does the responsibility ultimately lie? You can delegate tasks to AI, but when a critical decision is made, who is accountable for the outcome?
For example, no one would accept a situation where a medical diagnosis is fully outsourced to AI, and then, if something goes wrong, the response is simply, "The machine said so." That’s unacceptable. There needs to be a human who takes accountability for the decision.
So, businesses need to define their approach to AI oversight both from a regulatory standpoint and from an internal governance perspective. Where do you draw the line between automation and human judgment? Getting that balance right is crucial.
Quick-fire round: Key steps for AI success
Katy Gooblar: Let’s wrap up with a lightning round. What’s the one key step you take to ensure AI success? Quick answers, max five seconds each! Fay, you first.
Fay Churchill: Don’t believe the hype! Ignore the buzzwords on LinkedIn and move at your own pace, as long as you’re making progress.
Mike Le Galloudec: Show, don’t just tell. Build an MVP or a small proof-of-concept project, demonstrate its value, and get buy-in from the team.
Nic Weatherill: Align everything to ROI. Every AI initiative should be tied directly to measurable business value.
Audience Q&A
Katy Gooblar: Great answers! Now, let’s tackle a few audience questions before we wrap up. If we don’t get to all of them, we’ll follow up later.
First question: How do you balance the environmental impact of AI with its business benefits? Are there ways to make AI adoption more sustainable?
Mike Le Galloudec: Good question. The good news is that environmental sustainability aligns with cost efficiency. Reducing cloud costs, latency, and computational cycles directly reduces environmental impact. There are many technical ways to optimise AI, like choosing smaller models, caching responses, and using model distillation techniques where a smaller AI learns from a larger one. These not only make AI systems greener but also improve performance for end users.
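Response caching, one of the optimisations Mike mentions, can be sketched in a few lines. This is a minimal illustration, not a production design: a real system would use a persistent store, and the in-memory dictionary and model name here are assumptions.

```python
# Sketch: cache responses so identical prompts never trigger a second model
# call, reducing cost, latency, and compute (and therefore energy use).
from openai import OpenAI

client = OpenAI()
_cache: dict[str, str] = {}  # illustrative in-memory cache keyed on prompt

def cached_completion(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Return a cached answer if this exact prompt has been seen before."""
    if prompt not in _cache:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # more repeatable outputs make caching safer
        )
        _cache[prompt] = response.choices[0].message.content
    return _cache[prompt]

print(cached_completion("Summarise our returns policy in one sentence."))
print(cached_completion("Summarise our returns policy in one sentence."))  # served from cache
```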
Katy Gooblar: Thanks, Mike. Next question is for Fay: how is your ITV data science team structured? Do you control the data platform, or do you work with other internal teams? And once a model is developed, how do you ensure it’s effectively embedded into the organisation?
Fay Churchill: At ITV, our data science team sits within the broader insights group, alongside our data analysts and market researchers. Our engineers and data platform team sit under our Chief Data Officer within the tech division. We don’t own the platform, but we work closely with the engineering team.
As for embedding models, we follow an iterative approach: A/B testing, measuring value continuously, and refining over time. We don’t just deploy a model and walk away. We ensure adoption by keeping the feedback loop open and iterating based on real-world use.
Katy Gooblar: Final question: With AI automating more business processes, should prompt engineering be specialised for different roles? Do we need tailored training for different teams?
Nic Weatherill: Yes, absolutely. At Data Literacy Academy, we provide structured documentation for our internal teams, outlining key context about our company, departments, and objectives. This helps people write better prompts tailored to their specific use cases.
One tip: Before you start prompting, use a separate AI session to refine your prompt. I always start by explaining to the AI: "I’m trying to achieve X, with this context and this objective—help me craft an effective prompt." That way, I get a stronger, more targeted prompt before I even begin the actual task.
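Nic’s two-step tip translates into a short script: one call refines the prompt, a fresh call runs it. This is a minimal sketch; the goal, context, and model name are illustrative.

```python
# Sketch of two-step prompt refinement: ask the model to craft a better
# prompt first, then run that refined prompt as the actual task.
from openai import OpenAI

client = OpenAI()

meta_request = (
    "I'm trying to achieve this: a one-page briefing for our sales team. "
    "Context: we are launching a data literacy programme next quarter. "
    "Objective: help reps explain its value to clients. "
    "Help me craft an effective prompt for this task. Reply with the prompt only."
)

refined_prompt = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": meta_request}],
).choices[0].message.content

# Run the refined prompt in a fresh conversation so the meta-discussion
# doesn't leak into the final output.
result = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": refined_prompt}],
).choices[0].message.content
print(result)
```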
Closing Remarks
Katy Gooblar: That’s all we have time for! Thank you so much to our panel; this has been an incredible discussion.
And to our audience, thank you for your time and fantastic questions. If we didn’t get to yours, we’ll follow up with written responses on our website.
Have a great afternoon, and see you soon!
Unlock the power of your data
Speak with us to learn how you can embed org-wide data literacy today.