Cracking the code to successful AI adoption
There’s no shortage of hype around Artificial Intelligence. But while headlines focus on the next big generative model or shiny AI startup, the real work is happening behind the scenes. Data leaders are grappling with how to implement AI in a way that actually delivers business value.
In this LinkedIn Live hosted by Kyle Winterbottom of Orbition Group, a panel of seasoned data leaders shared hard-earned lessons from the frontlines of AI adoption.
Joining him were:
- Alex Sidgreaves, Chief Data Officer at Zurich Insurance
- Indhira Mani, Chief Data Officer at RSA UK
- Greg Freeman, CEO & Founder of Data Literacy Academy
Together, they covered everything from aligning AI with strategy to navigating the murky waters of responsible AI.
Here’s what stood out.
Why strategy always beats shiny tools
One of the biggest blockers to AI adoption? Starting with the tech instead of the business problem.
Greg Freeman made a sharp observation: organisations often dive into AI without a clear understanding of how it supports their strategic goals. “You can’t just point to a corporate strategy written in 2019 and expect it to be relevant now,” he said. “Leaders’ priorities change. Teams shift. The only way to know what really matters is to have actual conversations.”
His advice? Forget the theoretical alignment exercises. Sit down with decision-makers, ask what’s keeping them up at night, and find ways for AI to support those outcomes.
The foundations are (still) non-negotiable
Talk to any CDO for more than five minutes and the word “foundations” will come up. Indhira was quick to stress the basics: without trusted, high-quality data, AI just won’t deliver. “It’s garbage in, garbage out,” she said bluntly.
But there’s a twist.
Greg warned against letting foundational work become the whole story. “You can spend three years building perfect foundations, and still lose your job if you don’t deliver value along the way.” His solution? A ‘lighthouse’ strategy: balance the long-term groundwork with early wins that demonstrate clear, visible impact.
It’s about showing progress without sacrificing integrity.
What does ‘adoption’ really mean?
Spoiler: it’s not logging into a dashboard once a week.
Greg put it simply: “Adoption isn’t a metric. It’s a mindset.” Just because someone uses an AI tool doesn’t mean they trust it or that it’s changing how they work. Real adoption happens when people adjust their behaviour and when AI becomes embedded in day-to-day decisions.
That mindset shift is harder to measure, and harder to achieve. But it’s the difference between a pilot that fizzles out and an AI capability that scales.
Fastest wins? Start where the pain is
According to Alex, if you want quick adoption, start by removing tasks people hate. “Anywhere you can take away weeks of manual work, you’ll see AI embraced fast,” she said. One example she gave? Reducing underwriter case review time from seven hours to just a few minutes. That’s not just a win, it’s a game-changer.
Indhira echoed this, noting that pricing, claims, and risk modelling are ideal AI candidates in insurance because of the sheer volume of data involved. “AI isn’t just useful, it’s transformative in these areas,” she said.
And where adoption lags? Usually in parts of the business where AI is bolted on as an afterthought, or where it's not integrated with tools people already use.
Beware the “one-size-fits-all” trap
Alex shared a cautionary tale: a company rolled out Microsoft Copilot across their entire organisation… and six months later, only 20% of people were using it.
Her point? “AI tools need to solve specific problems. If they’re generic, people won’t engage.” Blanket rollouts rarely stick. Targeted solutions, integrated into existing workflows, work better every time.
Education isn’t optional
All three panellists agreed: AI literacy is just as critical as data literacy, and just as misunderstood.
At Zurich, Alex set up a Data Science Partner Programme that educates people across the business on what’s possible. “We’re not trying to turn everyone into a data scientist,” she said, “but we want them to spot opportunities and bring them to us.”
Greg added that it’s not just about tool training. “You’ve got to teach people to challenge the output. AI makes mistakes. It hallucinates. If someone copies the wrong company number into an RFP because ChatGPT told them to, that’s not a tech failure, that’s a literacy failure.”
Responsible AI: Not just for the ethics committee
Indhira didn’t hold back here. “AI can’t be a side-of-the-desk activity. You need proper frameworks, governance, and process.” At RSA, she’s ensuring data classification and platform governance are in place so sensitive data isn’t accidentally exposed or misused.
Alex added that at Zurich, they’ve appointed someone whose full-time job is managing responsible AI. “It’s not about saying no,” she explained. “It’s about helping the business understand risk, so they can make informed choices.”
And Greg? He brought it back to the people: “If your team doesn’t value data, no responsible AI framework will save you. The ethics conversation starts with culture.”
You need a shared language for value
One question that resonated with the whole panel was about estimating the value of AI.
Greg outlined a simple, three-step approach:
- Value hunting – find the problem worth solving
- Value forecasting – agree the potential ROI with business and finance
- Value realisation – track the outcome
“The forecasting stage is where most people go wrong,” he said. “If you don’t pressure-test the value assumptions with finance, don’t be surprised when you miss the mark.”
So, where are the sweet spots for AI adoption?
- Operations: where repetitive tasks drain time and energy
- Digital marketing: data-rich and often already tech-savvy
- Risk and pricing: where decisions hinge on crunching large datasets
- Customer journeys: when AI shortens response times or improves accuracy
- Anywhere people are drowning in spreadsheets
As Greg noted, “Digital teams are often digital-first and data-first. They’re ready.”
AI is a people problem, not just a tech one
The session closed on a theme that ran throughout the discussion: successful AI adoption is more human than it is technical.
You can have the best models, the cleanest data, and the sharpest tools, but if the people in your organisation aren’t bought in, it won’t stick.
As Kyle summarised, “Culture eats AI for breakfast.”
And from the looks of it, it always will.
Follow Orbition Group for more insights from senior data leaders, or explore Data Literacy Academy to learn how to scale literacy and culture alongside your AI initiatives.
Kyle: Hello and welcome to another LinkedIn Live brought to you by Orbition Group. I'm Kyle Winterbottom, your host, and today I'm delighted to be joined by three, maybe four great panelists as we discuss successful AI adoption. As always, we want to make this as engaging as possible, so please get your questions in the comment section on LinkedIn. We'll try to answer as many as we can throughout the session. Without further ado, I'll bring our panelists up and let them introduce themselves.
But before we start, let's do a quick round of intros and then jump into the meat of today's topic. Alex, you're at my top right, so kick us off if you would.
Alex: Sure, I'm Alex Sidgreaves, Chief Data Officer at Zurich Insurance.
Kyle: Nice, thank you very much indeed.
Indhira: Hi, I'm Indy, Chief Data Officer at RSA UK International.
Kyle: Very good.
Greg: Hi, I'm Greg Freeman, CEO and founder of Data Literacy Academy. We help large financial services organisations, among others, roll out data literacy and culture programmes across their wider organisations.
Kyle: Nice, cool. I've got a raft of questions lined up that I'm keen to ask you all. I'm sure the audience will have plenty of questions as we go through the session today, so please keep your questions coming.
Obviously, successful AI adoption is all the rage now. There's so much talk about generative AI, agentic AI, and many exciting developments. One key topic we want to tackle early is the age-old challenge of aligning any initiative, especially an AI initiative, with a business goal or strategy. Greg, I'll come to you first because you, like me, probably have a slightly broader experience working with various businesses. Do you have any hints or tips around best practices for aligning AI initiatives with business strategy?
Greg: Yeah, I think the first step—and this is always the topic of choice—is making AI valuable to the organisation. That's why Data Literacy Academy exists. A few years ago, I noticed clearly that while many people were working with data, few realised or understood its value. The same is happening and is likely to continue happening with AI. The gap between expectation and reality is still quite large, leaving business leaders dissatisfied because it's expensive.
The only effective approach is ensuring it's understood on the business side by aligning it with things they understand. Typically, business and commercial leaders—like you and me—aren't particularly interested in data or AI as standalone topics. They care about how it supports their goals and objectives.
The simplest tip I can offer is just to ask them directly. The strategy posted on websites, often from pre-COVID times like 2019, typically doesn't reflect current priorities. Things change rapidly in large organisations, teams rotate, and priorities shift. The best approach—similar to the webinar you did with AX, which I found insightful—is to sit down and have direct conversations with leaders. Probe their brains to understand their immediate challenges, not just operational pains, but the strategic issues that genuinely move the needle on their scorecards. Addressing these strategic issues is beneficial and will help win hearts and minds. Don't just rely on publicly available corporate strategies; have real conversations and make it all about their current priorities.
Kyle: Nice. Indy, Alex—we've got opposite ends of the spectrum here. Alex, I think you've been at Zurich for around 20 years, give or take. Indy, you've been at RSA for a year. You're both inside large, complex organisations where AI adoption is undoubtedly a topic of interest. Alex, you've seen this evolution over a long tenure, whereas Indy, you're entering an organisation as this wave of AI adoption is accelerating. Indy, I'll come to you first. What's your approach to successfully integrating AI into the business, aligning it with the strategy and goals?
Indhira: I think I'm in a very fortunate position, and I've said this to you, Kyle, even before RSA. We are part of Intact Financial Corporation, the biggest insurance player in Canada, and now a worldwide one after acquiring RSA and other businesses in Europe. If you look at how they've become one of the global leaders and the most important player in the Canadian market, it's by leveraging data and AI for risk selection and pricing. Over the years, they've made significant investments in data and AI, and we're now deploying third-generation AI and ML models. That's how big it is—we have 600-plus people in the Intact group's data and AI lab.
I think I'm in a very fortunate position because at RSA, we understand data is a key enabler to unlocking value strategically. It's not just the big corporate roadmap, but also our goals and plans to achieve our path to green. Data rightly underpins every strategic objective and business outcome outlined there. We work coherently with all parts of technology and the business transformation offices to ensure we embed solid foundations of data and AI, so we're successful together. We're always outcome-based, but we also have precedent that shows how and why we'll succeed. That's why I think I'm in a fortunate situation. This might be new for me, but it's not new for RSA or the Intact Group. Underpinning business outcomes, as everyone agrees, is key, but having precedent makes it more viable and less challenging. Businesses understand because they've seen multi-year incremental benefits—we have around £80 million-plus in benefits from AI and ML adoption.
Kyle: Oh, nice. Interesting. Well, that'll help with adoption—show an 80 million pound benefit, and you'll be adopted without doubt. Alex, keen to get your thoughts here, and then we'll jump into Lynn's excellent question.
Alex: A couple of things I'd highlight where adoption has been successful are cross-functional teams. The business shouldn't be an end result you deliver to; they should be involved throughout the build process, understanding what's being developed. This approach allows clarity on measuring impacts and ensures success, building essential trust for adoption. Also crucial is not adopting a one-size-fits-all tool approach. If you want successful adoption and true value for the business, don't roll out a single tool across the entire organisation. I heard a good example where someone rolled out Copilot across an entire organisation, and after six months, only 20% adopted it. The more specific you are about tools, clearly defining what problems they're solving, and integrating them into business processes, the more successful your adoption will be.
Kyle: Interesting. Lynn asked an excellent question we probably should have tackled at the outset: What do we mean by adoption? Greg, what's your take?
Greg: That's a great question. I think what we often consider adoption are actually proxies for true adoption rather than the reality. We're all looking for measurable outcomes, but decision-making and behaviours, which happen in real time, are the hardest things to track. Adoption is truly a mindset and a set of behaviours that lead a person to change how they operate. It's easy to think adoption is simply about how many people log into your dashboard or use your AI tool, but these are just proxies. Are they using it correctly? Are they making the right decisions? These aspects are challenging to track.
We have someone whose full-time job is creating and training models to achieve specific business goals. He isn't a data scientist but is passionate and valuable because everyone in our business prioritises data and AI. Whether it's RFP responses or marketing posts, data and AI underpin their actions. When the business mindset shifts, dedicating resources to AI becomes valuable. Without that mindset, it’s less beneficial. So, Lynn, adoption isn't just usage—it's a mindset and behaviour shift, embedding it culturally into work practices. True adoption is harder to measure, hence more frustrating.
Greg: A fun fact about Jonathan—he's probably the person I've known longest outside of my direct family. He's just thrown a question my way that's relevant to everyone, I think. To me, this is a data literacy issue. Data literacy and AI literacy shouldn't be too greatly distinguished, because if you don't have data literacy, you're unlikely to jump straight into AI literacy. If people believe either data or AI alone will solve everything, they simply don't understand data or AI. I think that's something we can all agree on.
Much of the work we do with leadership—who we don't necessarily expect to fully understand data and AI or even like it—is about outcomes. Leaders understand people. Any good leader, anyone who's reached a certain level in their industry, has likely been managing people for the past 20 years. They will have observed their teams go through multiple transformations—digital, data, and now AI.
Jonathan's point likely revolves around data quality. There was also another excellent question earlier regarding what constitutes good data for AI models. The straightforward answer is data you trust. More importantly, it's data the audience trusts, a point often overlooked. Even if data can be proven statistically valid, if your audience trusts their spreadsheets more, you won't get anywhere.
It's essential to communicate to leaders that their people are integral to the data quality problem. Leaders don't trust the data because their teams generate poor-quality data daily. We all know that. Emphasising that neither AI, data, nor technology alone is a silver bullet, and highlighting that many challenges are people-driven, resonates with leaders. They understand people and their inherent challenges.
Kyle: Great. Indhira, turning to you—as mentioned earlier, you're in the thick of it after 12 months. I'm sure you're fielding all kinds of questions typical for data leaders and CDOs, such as "Can we apply AI to this?" How do you help your business stakeholders understand the necessity of strong data foundations for effective AI?
Indhira: Definitely. I always stress getting your foundations right. I've been fortunate because we recognise data as both the input and output for any AI model or AI-related initiative. Without trusted, secure data, it's simply garbage in, garbage out. It's a straightforward concept: you get out exactly what you put in.
A model is only as good as it is at the point of deployment; after that, it becomes obsolete quickly. It's about education, awareness, and closely involving stakeholders in the journey. My challenges revolve more around accelerating and optimising the process rather than justifying why we're doing it. I'm fortunate to operate in an environment where the focus is on responsible yet accelerated growth. AI isn't a magic wand granting every wish. It's about solid foundations, education, and prioritising effectively—that's how we win hearts and minds.
Kyle: Excellent. Alex, any thoughts on communicating this challenge effectively?
Alex: Business fundamentally revolves around risk. Actuarial models have existed for a very long time and depend entirely on good data. I've always had straightforward conversations with stakeholders: without good data management, these models can't work. A positive outcome from generative AI is that it emphasised the importance of solid data management practices—typically a tough, boring subject no one wants to discuss. Fortunately for me, framing it in terms of risk makes it easier to communicate.
Kyle: Great point. Indy, earlier you emphasised getting foundations right. Aaron asked a relevant question about enabling leaders to drive an effective agenda around foundational data practices. Any insights on how?
Indhira: I'll emphasise again—bring leaders along from the very start, not after embarking on the journey. Data and AI shouldn't stand alone; they should integrate directly with business outcomes. Our roadmap shouldn't be separate—it must be embedded within the business roadmap. That alignment is how we effectively enable leaders to achieve their business objectives.
I always remind my leaders that data won't inherently take profits from 10% to 15% or significantly improve risk modelling precision alone. Instead, data securely supports and enables reaching predefined business goals. Data is an enabler, not the driver itself—that's the agenda we're setting.
Kyle: Yeah, great. Greg, anything to add on that?
Greg: Yeah, I think the first point I'd make—and I always find myself referencing this model because it’s probably the best description I've come across—is around the obsession with foundations. Foundations are absolutely essential, don't get me wrong, but you can spend two or three years building excellent foundations only to find yourself out of work and out of luck because you've not delivered value in the meantime.
The data industry is extremely expensive. Three years ago, it was already a $272 billion industry, and it's probably grown significantly since then. Oakland, recently acquired by Softcat, emphasises the importance of foundations but also stresses the significance of 'lighthouses'. The idea is that lighthouses—visible, impactful deliverables—need to be delivered alongside foundational work. Yes, foundational work is necessary and should be consistent throughout your strategy, timeline, and roadmap. However, if you fail to deliver these impactful milestones or early wins, you won't effectively drive your agenda. Instead, you'll be perceived as an expensive, boring cost centre focused only on foundational tasks without demonstrating tangible value. I believe that's a critical point.
Kyle: Hmm, yeah. Alex, anything to add to that?
Alex: I'm not sure I could put it better than Greg. I completely agree—you must adopt a dual-speed approach. Foundations are vital; no data professional would disagree. However, a business person doesn't necessarily want to hear exclusively about foundations.
Kyle: Yeah, 100%. That ties directly into my next question. Obviously, there are trade-offs between foundation building and pursuing quick wins. Often, these quick wins might not align perfectly with the broader strategic direction. They tick boxes but might not significantly advance the journey from A to B, which happens more often than people realise. So, Alex, how do you strike the right balance between long-term foundational work and delivering quick results that gain business buy-in?
Alex: It’s essential to accept that quick wins don't necessarily have to build directly towards your overarching strategy, and that's fine—as long as not everything is a quick win. If every action is just ticking a box, you’re not truly progressing. Balance is key. Even after completing foundational work, you’ll still have a long-term strategy, especially with AI, which takes time. Quick wins might focus more on immediate efficiencies or removing tasks that people dislike, helping get buy-in for AI adoption.
Another valuable approach is incremental development, similar to the agile methodology. Deliver small, targeted solutions and then continuously build upon them. Over 6 to 12 months, these incremental updates can cumulatively deliver significant value. This approach means you avoid constant point solutions and instead build reusable, scalable components.
Kyle: Yeah, exactly. That's what I was referring to—the importance of reusability and avoiding repeated point solutions that don't scale.
Kyle: Right, questions are coming thick and fast, and we're tight on time. John asks, 'How are you managing the educational aspect regarding various AI tools and their appropriate applications? There's significant value in using AI for efficiency and making unstructured data accessible, but a clear distinction exists between what AI tools could solve and what they should solve.' Great question. Greg, given your background, any insights here?
Greg: Absolutely. It’s a great question, and it builds nicely on Alex's points. AI tools should target significant, strategic, high-value problems. However, AI also excels at resolving daily headaches and operational inefficiencies. Honestly, generative AI is an opportunity for data, AI, and IT teams to gain significant time back by solving straightforward problems quickly, allowing them to address bigger, more complex challenges in the background.
Businesses missing the opportunity to deploy generative AI in practical applications are missing out hugely. Effective, rapid value, backed by a robust long-term strategy, is the optimal approach. It’s critical to allocate sufficient resources and time to solving foundational, strategic problems while enabling day-to-day AI use for immediate benefits. Even though some argue about whether something like ChatGPT is 'real AI', it's practical AI that we engage with daily. The key is balancing immediate practical solutions with deep foundational investments.
Kyle: Interesting. Indy, any thoughts about balancing what you could do versus what you should do?
Indhira: Education in this area must be comprehensive, covering both AI and general data literacy. We're trying to clearly differentiate between what’s necessary and what's possible. As Greg mentioned, if organisations are already effectively utilising these tools, that's excellent. However, where gaps exist, we’re addressing them through initiatives like launching an academy and targeted training. Importantly, aligning personal interests with business objectives helps ensure meaningful engagement and successful implementation.
Kyle: Yeah, absolutely. Alex, anything to add?
Alex: Yeah, we established something called a Data Science Partner Programme that's worked really well. Now, this isn't about trying to create an entire workforce of data scientists—that would cause me nightmares I don't even want to think about—but it's more about taking people who are interested through an educational process on what's available and what kinds of things it can do. The aim is to put them back into the business, enabling them to identify opportunities that they can then bring back to us, so we can work together to build solutions. That's been really successful for us.
Kyle: Yeah, that's really interesting. Obviously, there's always a kind of ethical or moral slant when it comes to the "could versus should" question. We've got a few questions aligned to that, so we'll jump into it. Indhira, I'll come to you next. Guardrails—what do you think is important when you're putting these tools into the business?
Indhira: Oh, there's loads—don't get me started! I'm sure my fellow panellists will agree: the data function often ends up acting like the police whenever there's an AI conversation, asking, "What are you saying now?" But when we talk about guardrails, it's not just about the tools or technology we implement. It's also about the processes we set up around them, the frameworks we build, and importantly, the quality of data used as inputs. I think these elements sum up the Responsible AI Framework well, outlining how organisations should consider AI.
We're working towards responsibly handling AI, firstly ensuring we stay focused on the tasks at hand and don't get distracted. Secondly, we have to constantly question whether what we're doing is ethically right. We must ensure we use artificial intelligence and related tools for their intended purposes, nothing beyond that.
Specifically with generative AI, it's crucial to clearly define your intended outcomes and work backwards—considering why you're using AI responsibly and ethically, and identifying key considerations for achieving your outcomes. Otherwise, things can quickly escalate into an uncontrolled environment.
Kyle: Makes sense. Alex, any additional thoughts on guardrails?
Alex: I'd echo the focus on Responsible AI. AI is not a side-of-the-desk activity. I have someone whose dedicated role is managing Responsible AI across the organisation. It's not about saying "no" to projects but educating stakeholders about risks. It's about understanding the ethics: Are we comfortable with it? Is it transparent? Can we explain it? Also, from a regulatory perspective—as we're a financial services organisation, we need to understand forthcoming regulations. It's our responsibility to keep abreast of developments through podcasts, government think tanks, papers, and sprints with regulatory bodies. It's absolutely about informing the business, which then decides the level of risk it wants to accept.
Kyle: Greg, from an educational perspective, have you seen effective methods in organisations for implementing governance, risk, and ethical guardrails around AI?
Greg: I'll first share a specific guardrail that's very important—no surprise, it's about people. Often, we overlook the fundamental lessons we're trying to teach about AI. For instance, generative AI, such as that used by ThoughtSpot, provides a very simple concept for end-users—like more effective Googling. But it's crucial to teach people they can't blindly trust it.
Here's an example: we use generative AI for our RFP responses. Recently, I reviewed a response from our team where our company address and company number were incorrect. I asked, "Where did you get this information?" The team member replied, "The model provided it." That's precisely when you need to challenge AI's outputs. Checking something simple like an address or company number should still involve traditional methods like Google or Companies House.
Critically thinking about AI-generated information is a crucial guardrail—not just teaching people to use ChatGPT or Copilot, but teaching them to challenge the outputs.
To your other point, Kyle, building a data stewardship and ownership programme is genuinely challenging, particularly in environments where many people don't inherently value data. Getting stewards and owners to tag data accurately at a granular level is particularly tough.
If you can get them to turn up to a community once a month, you're winning, let alone tagging data at a role level. Do you know what I mean? So, we've got to make them aware of the value, the opportunity, and the carrot of data before we achieve effective stewardship. I think it's really important not to look at stewardship in isolation—as a topic on its own, I find it boring. What we've got to do is help people understand the value of data, so they understand things like the value of being FCA compliant, which they generally do within large organisations. Once you've sold them on the carrot of data, they'll buy into it, stick with it, and they'll become effective at tagging, addressing colleagues when they see something unethical happening, and so on.
So, I just think it's a different lens on it, and obviously, it's about people from our side.
Kyle: Yeah. A few good questions are still in the chat, and I'm conscious of time, so we'll try to get through as many as we can. There's a bit of a "how" question here that's slightly linked to this, asking for suggestions on how to ensure sensitive information is excluded from datasets. Alex, is there anything you can advise around this or anything you are doing that addresses this?
Alex: Probably not—I'm not allowed to say no. I mean, all our information is tagged, so anything that's sensitive will already have been tagged, and it would be excluded if it needed to be kept safe.
Kyle: Okay, so there you go—foundations, right? Anything else?
Indhira: It's data classification, right? That's why you have platform governance in place. Classify data, anonymise personal information and personally identifiable information, because that's what your data product or ML product uses for model training. This is exactly why some of the foundational elements need to be correct, even to enable AI adoption within the organisation.
Alex: Yes, though it's about knowing the data you've got. So, certain things will obviously be classified as sensitive, for example, personal history. But it's where you have sensitive information appearing unexpectedly—that's when it comes down to really understanding your data.
One example that's always a minefield is claims notes in an insurance context, because at the time of claim, anything can be recorded in claims notes. We protect that and don't bring it into data products because we can't be sure what's there. Depending on the use case, we look at it in isolation. So yes, some cases are obvious, but there are complexities.
Kyle: Yeah, that's interesting—understanding what's in the data and also having mechanisms for classification. Just top of mind, do you think more highly regulated industries like yours think about this more than others do?
Alex: Probably.
Kyle: Yeah, because we know that great data classification means better-trained models and outputs. But if there's no massive incentive or risk of significant fines or breaches, maybe businesses aren't incentivised enough to get this right.
Indhira: It's non-negotiable. I agree—it's even about ethical use of data.
Kyle: Yep, cool. Final question from Kevin: How to overcome challenges related to business estimations of value related to AI?
Greg: Yes, the framing of the question is interesting. As long as I'm reading it correctly—I'll take it from my perspective—if you allow the business to own the estimation single-handedly, it's your fault if you can't meet it. We talk about a three-stage process. It sounds like we've effectively jumped the process to value realisation and perhaps done some value hunting.
We talk about value hunting, value forecasting, and value realisation. If you've hunted for the problem, you know what it is, and hence you have an estimated value for the return on the AI product. But the middle bit—the value forecast, where you as a data leader or professional agree on a forecasted value mutually signed off by the business stakeholder and a finance professional—is crucial.
Someone needs to genuinely test the numbers, challenging whether the value is realistic, whether created by data teams or business teams. Once mutually agreed and pressure-tested during this forecasting phase, you shouldn't face underestimation because everyone has committed and signed off.
Realising and tracking that value is another challenge, but estimation itself shouldn't be problematic because you shouldn't build products without robustly pressure-tested estimations. If I'm interpreting the question correctly, I believe that's where the gap might be. That mutual agreement phase between the three key parties is critically important.
Kyle: Yeah. I’m not sure whether the question is more generally about the expectations of what value businesses think they’ll realise by building AI products, maybe? I’m not too sure, Kevin—if you want to try and clarify that, that would be great.
There’s a really good question here. We’ve got only five, six, seven, eight minutes or so left. So, if you’ve got any more questions, please get them in so we can get them answered.
Pina has asked: Where have you seen the fastest adoption of AI internally, and what made that part of the organisation ready for it? Fantastic question. Alex, I’ll come to you first.
Alex: I think this probably brings us full circle back to where we started—adoption. Really, the fastest adoption I’ve seen is where I take away things that people hate, right? You remove tasks from their jobs. What was said earlier is absolutely right: AI is really simply fitting in to remove large amounts of manual work that people were doing.
So, anything where I’ve removed a task that might take someone weeks—literally weeks—to do manually... adoption of that is fast. And it's those targeted pieces. I think wherever you’ve got a targeted piece that can do that—great.
The other area is anything that allows us, as part of the customer journey, to speed things up. So we have some large parts of our business where we work with authorities and receive an awful lot of information in Excel. The ability to take that information, which comes in from a lot of organisations in a million different ways, and to standardise it so our underwriters can respond quickly, has enabled us to reduce case review times from seven hours to a couple of minutes.
That means we can respond to the customer faster, and the underwriter gets to actually spend their time doing the underwriting much more efficiently. So I’d say the highest level of adoption is where you’re removing something that people truly hate—something that causes friction and doesn’t add value. That’s where you’ll always get the fastest adoption.
Kyle: Nice. Indy, have you got an example of where it’s worked internally for you?
Indhira: Yeah, absolutely. So the fastest adoption is where the highest impact is, right? Taking away manual work, just like Alex said, but also making sure that we’re making the underwriter’s life easier—where we’re getting more conversion from quote to submission.
I think that’s one of the biggest challenge areas, and we could see adoption quickly there. But also, insurance is a risk business—risk modelling, pricing, claims—these are the areas actually being revolutionised by using AI, simply because it gives you a greater ability to make faster decisions. It just eases the workload when you compare how much data you can crunch manually versus with AI.
So I think these are the areas where, internally, I feel the largest adoption is happening—in AI and ML.
Alex: I think the other thing that’s probably just worth saying is, if you can integrate it with something they already use, that helps as well. People don’t like having to go out of one thing and into another and then out of that into something else. If you can integrate it into what they already use—a button in their workbench or whatever it may be—it really helps.
Kyle: Yeah, makes a lot of sense. Greg, are you seeing anything—any kind of trends—in terms of what organisations are able to adopt quickly, and any relationship between parts of the organisation and their readiness?
Greg: Yeah, I think we’re seeing a lot of effective use cases in marketing—especially around digital marketing teams. A lot of companies have huge databases of data, which are actually sufficient to create a decent model. And typically, by nature, digital marketing teams in particular are quite... ready. A bit like insurance pricing—couldn’t be more ready for anything data-related.
To Alex’s point earlier, actuaries have been doing this for decades, right? This is just the next evolution. So pricing tends to be a really good one, and there are some really big win opportunities there.
But yeah, because digital marketers are almost by nature digital-first, data-first... things like segmentation and recommendation can be so difficult. E-commerce teams—similar vibe. There are some amazing use cases there. Some of the big retailers we’re working with are seeing really effective applications in that space.
Kyle: Nice. Very good. Well, we are just about to wrap up a couple of minutes early. There are a couple more questions, but we’ll definitely go over time—and I know some of you have a hard stop at 3:00 PM.
So thank you, everybody, for getting involved—for your contributions and your questions. A very insightful and engaging conversation.
Indy, Alex, Greg—thank you so much for giving up your time and for coming on. Hope you enjoyed it, and thanks for answering all the audience’s questions. Look forward to speaking to you again soon.
Greg: Thanks.
Unlock the power of your data
Speak with us to learn how you can embed org-wide data literacy today.