TRANSCRIPT
Matt Wilkinson and Jasmine Gruia-Gray discuss findings from their ELRIG Drug Discovery 2025 survey, revealing a critical gap between AI access and adoption in life science organisations. The conversation explores why 38% of employees are becoming "secret cyborgs" - experimenting with AI personally without organisational guidance - and what commercial leaders must do to close the readiness gap before it becomes a competitive liability.
Introduction and Survey Context
Matt Wilkinson [0:04]
Hello and welcome to A Splice of Life Science Marketing. This episode is brought to you by Humantic AI. Imagine if you could close 22% more deals overnight using a buyer intelligence platform. Imagine slashing prep time by 70% and boosting closed-won deals by 37%. That's what customers the world over are achieving using Humantic AI's account and buyer intelligence system. Today, we're going to be staying on the theme of AI and talking about some of the results that we got when we were interviewing exhibitors at ELRIG's Drug Discovery 2025 conference in Liverpool. I'm joined today by my co-host Jasmine ("Hey there, Matt"), and we're going to be digging into all of the responses that we got when we were talking to the exhibitors of ELRIG up in Liverpool.
The Light User Paradox: Why 44% Have Tried AI But Aren't Using It Daily
Jasmine Gruia-Gray [0:57]
Yeah, there are so many surprises in this survey, it's almost hard to figure out where to start, but let's kick it off anyway. So we know that from the survey, 44% of the respondents have tried AI but aren't using it daily. They've overcome some skepticism, but haven't seen enough value yet. Why do you think these light users hold the key to AI growth?
Matt Wilkinson [1:30]
I like to think of the analogy of AI being a bit like a gym membership. You can have a license, or you can have a gym membership, but if you don't ever go, you never grow. And I think it's exactly the same sort of thing. With a light user, it's not so much that they've not got the gear; they haven't invested enough time to start seeing the gains that they would be getting. It's not about getting more information. Everybody knows that the more you go to the gym, the fitter and stronger you get, and it's the same with exercising your AI muscle, if you will. It's not about telling people how; it's really about showing them the use cases that matter, and then getting them to understand how they can adopt it for their own bespoke use cases. In B2B marketing terms, it's the difference between selling capability and selling the resolution to a problem.
Jasmine Gruia-Gray [2:21]
It's a lot about how people learn. People really learn by showing each other, rather than just telling each other and then expecting you to go off as a novice and try it out on your own. So yeah, I think use cases of all varieties are super important, not only to embed AI in your current processes (and, as we learned, to take the donkey work, the mundane stuff, out of what you're doing), but also to help stretch and augment what you're doing. It still requires people to show you, and for you to be able to ask questions and learn that way.
Matt Wilkinson [3:04]
No, absolutely. I mean, as our friend Dr Lisa Palmer, who's a real AI expert, would say, we have to show AI and not tell it. And I think it's really important that people get the opportunity to roll their sleeves up and become familiar with these really, really powerful assistants that can help them accomplish so much more than they could do on their own.
The Dunning-Kruger Effect: Are Power Users Overestimating Their Readiness?
Jasmine Gruia-Gray [3:27]
So do you think these two groups that we're calling regular users and power users are overestimating their AI readiness?
Matt Wilkinson [3:36]
I kind of feel that the regular users probably aren't, but I think the power users are probably underestimating their capability. There's something called the Dunning-Kruger effect, which means, roughly, that the more you know, the more you realize you don't know. The most advanced users of AI that I know actually realize how little we know about how AI works, the inner workings of the AI itself. Even the experts that are building these large language models don't really know what's going on within those neural networks, so we have to be really careful to understand that they are black boxes whose insides we don't understand. And so, as users, that definition of a "power user" can become quite tricky. What does that look like in, well, the realms of coding? There are a lot of people doing vibe coding on platforms like Lovable and others, or creating multi-step agents on platforms like n8n. But then you look at the real capabilities of some of the advanced people that are applying this: there's a gentleman called Reuven Cohen who is using agent swarms to create vast amounts of code and to check that code. When you start seeing how people are really using this in practice, you realize that even if, like me, you're a power user of the gen AI tools for creating text and multi-step workflows, that actually isn't advanced compared with where some other people are going with this, particularly in the realms of coding.
Jasmine Gruia-Gray [5:00]
I completely agree. I think the Dunning-Kruger effect summarizes it really well.
Cautious Optimism and the Psychology Behind AI Sentiment
Matt Wilkinson [5:06]
And I know I've been introduced once or twice as an AI expert, and I shy away from that absolutely, because I know how little I know. One of the really interesting things I found was just how people felt about AI itself. In terms of that, I was surprised that more than 60% said they were cautiously optimistic about the use of AI. How did you feel people responded to that question?
Jasmine Gruia-Gray [5:31]
I actually wasn't too surprised at this cautious optimism. In part, a lot of salespeople and FASs (field application scientists) are scientists at heart. By the way, we didn't see very many marketers and product managers on the trade show floor, which is why I didn't mention them at the beginning here. They're scientists at heart, and as scientists, we look for the evidence. We build up that evidence and knowledge and test it out over the course of time, and in many cases, the exhibitors we spoke to were just getting their feet wet with AI: thinking about use cases, practicing, and having others to talk with and brainstorm with. So I think part of that cautious optimism comes from that scientific background. The other part is that many of the people we saw and spoke with, as the previous slide showed, were light users, so it's almost a factor of "I don't know what I don't know." There's also what you hear a lot of in the news: should I be worried about it taking my job? That was lurking somewhere in the background, I think unnecessarily, but that's a fact of what some people were thinking. Some of the caution also comes from what people are hearing about guardrails, and it depends on the organisation: some organisations had really well-thought-through guidelines and how-tos, but others did not. I think all of those things factor into this cautious optimism. Did you see the same thing?
Cognitive Offloading: The Hidden Risk of AI Dependency
Matt Wilkinson [7:27]
Yeah, I actually found this question quite interesting, because as I was asking it, I realized that I'm probably cautiously optimistic myself, and that's how I would have responded. But actually, I'm both excited and optimistic and also very concerned about the risks. There are risks in terms of cognitive offloading. I think there are big risks about the data that we're giving up to organisations that are owned by some very, very wealthy and powerful people. You just have to look at the race towards AGI and the vast number of data centers that the US is building in comparison with the rest of the world. Then you start looking at who the big players with big vested interests in that are, and we're almost concentrating the power of this intelligence in a handful of very, very powerful people, and that in itself carries some risk. So the more you do to understand the politics behind who's creating the models, the politics behind who's funding the models, and the risks to the economy because of the vast amounts of money going into AI development, the more you see risks that I think we all need to pay attention to. The risks aren't just to our data, or the potential risk of AGIs going, you know, going mad and wiping out humanity in kind of Terminator style. There's also the question of what this means for the general financial situation of the world. So I think there's some economic risk in there as well.
Jasmine Gruia-Gray [8:52]
I think it's worthwhile pausing here for a second and talking a little bit more about cognitive offloading, because some of our listeners may not know that term. To me, what is most important to understand about cognitive offloading is this trend towards people giving up the ability to think and not continuing to exercise that really, really important muscle, so that whatever the AI tells you, you just accept as fact, rather than constantly challenging it and constantly thinking for yourself: what other questions are there behind the information that the AI has given me? Do you have another interpretation of cognitive offloading?
Matt Wilkinson [9:45]
Maybe not a different interpretation, but there are two key things that I've definitely seen, and people are starting to make memes on social media about this; as soon as you start seeing it as a meme, you know that it's real. The big concern that I have is making sure that we're still exercising our mental muscles. There are definitely things that we don't need to do as frequently, and we can offload some of the drudgery work, but we really need to make sure that we're exercising the things that keep us human. I like to think of AI as a bit like a cover band: it's great, it can create fantastic things, and actually, when it's really well trained and you give it great context, it can do a really, really good job of creating stuff, often a better job than most people can. But you still have those geniuses with the creative spark, the ability to do the original thing. And in our own right, as human beings, we all have our own genius. I think we have to be really careful to preserve that genius and not offload everything, so that we still have our own opinions and still think about how we do things. There's a fantastic couple of episodes of The Simpsons where they talk about what they call "Cheat GPT", and again, when The Simpsons is featuring something, it's become a societal shift that's worth paying attention to.
Jasmine Gruia-Gray [11:03]
I think that's well said. I tend to think of cognitive offloading as the sort of "but what if" situation. It's about constantly asking, what if it's this and not that? It's about constantly challenging yourself, as well as not just accepting what the AI has said.
AI-Assisted Analysis: Correlation vs Causation
Matt Wilkinson [11:25]
Even looking at the results of this report, I absolutely used AI to look at correlations across different data, different questions, and that was really interesting. AI is brilliant at finding correlations that humans maybe wouldn't find, but it can very easily fall into the trap of assuming that correlation equals causation, and humans do that as well. We need to be able to learn from our own experience, to be able to say, hey, this hasn't necessarily caused this; it may be a byproduct of something else. And I think that speaks to one of the more unsurprising things that we found: software-as-a-service companies, those companies that do an awful lot of software development, had by far and away the highest level of AI maturity. That was really interesting to see. But then, of course, they're working with code, and code has already been one of the most disrupted areas of the industry. I don't necessarily know that we're seeing huge numbers of developers being laid off just yet, although that may be coming soon, but we are seeing huge increases in the amount of code being created by AI. I think that just goes to show that those organisations that treat AI as infrastructure, not novelty, really do have it baked into their daily workflows, and then they're thinking, well, this is our core product, but we could also use it in sales, we could also use it in marketing. So it's far more accepted across the organisation, whereas in some of those organisations that are maybe more focused on the lab, AI doesn't impact day-to-day operations as much just yet. What was your perception of those conversations?
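To give a flavour of the kind of correlation pass Matt describes, here is a minimal Python sketch. It is not the actual analysis run on the survey; the CSV file name and column names are hypothetical placeholders.

```python
# Minimal sketch: look for correlations across survey questions.
# "survey_responses.csv" and its column names are hypothetical.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

# Encode categorical answers as integer codes so they can be correlated.
encoded = responses[["ai_usage_frequency", "ai_maturity_score"]].apply(
    lambda col: pd.factorize(col)[0] if col.dtype == "object" else col
)

# Pearson correlation across question pairs. A strong correlation here
# is a lead to investigate, not proof of causation: SaaS firms scoring
# high on AI maturity may be a byproduct of something else entirely.
print(encoded.corr())
```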
Jasmine Gruia-Gray [12:58]
I think software-as-a-service, or SaaS, companies are so much more mature than the exhibitors more closely associated with our industry (the tools providers, the reagent providers, even the automation folks) because they've got all the use cases, right? Coming back to what we said earlier, people learn from use cases, and it just felt like a more natural fit for them to adopt AI much sooner than any of us had, because they had the proof right in front of them. Even among folks who said their organisation is not using AI, or it's not embedded in their processes, 38% are experimenting personally with AI. It feels almost like a grassroots movement, which suggests AI innovation is bubbling up from the bottom. Should leaders identify and empower these internal AI champions, even if they are light users, instead of imposing a top-down program, do you think?
Secret Cyborgs: The 38% Experimenting Without Guidance
Matt Wilkinson [14:10]
This is a really interesting observation, and it's one that's not new. Those 38% of individuals are what I like to call secret cyborgs, a term coined by Ethan Mollick, professor of innovation at Wharton. These secret cyborgs are essentially using AI in private: they're using it in their daily lives, and they're bringing those tools into work. Telling them not to is pretty much like saying, hey, we're not allowed to search the internet anymore at work. So for those people, why would they not use AI? The problem is that, very often, people using AI personally don't have paid-for accounts, they may not have set up privacy restrictions, and they may not have been given guidance around what data they can and can't put into the AI. That's really where the real risk lies. So for me, the first bit is understanding that this is going to be happening in any organisation. An organisation that doesn't yet have an AI policy doesn't necessarily mean an organisation that doesn't yet have AI users; just because you haven't given people licenses doesn't mean people aren't using it. That's the first thing that's absolutely clear to me. Then I think we really need to build both top-down and bottom-up approaches to the usage of AI, maybe even side-on approaches as well. From a top-down perspective, what we really need is a culture of experimentation within the organisation, enabling people within a set of guardrails: maybe there are some approved tools you're allowed to use and some approved ways of using them, and you actively encourage people to use those tools. One of the most important things about that is that it not only helps us understand the tools, understand what we can achieve, and benefit the organisations we work in, but it also helps to allay the fears that we're going to be made redundant because of AI. From a bottom-up approach: absolutely identify those AI innovators, celebrate them, celebrate those use cases, and encourage that culture of experimentation, the right kind of experimentation. There are some great examples of this across the life sciences. You've got organisations like Promega and Moderna that were very early on in the adoption of AI in various different ways, and they've really been able to achieve some fantastic things within their organisations. Speaking with people at Promega: is adoption as uniform as the case study on the OpenAI website might say? Probably not. But the adoption is there, everybody's using it, everybody's becoming aware of it, and people are using these things without it being in the shadows. And that's really, really important.
Jasmine Gruia-Gray [16:55]
I think this is an opportunity for companies to elevate the culture of collaboration within their companies, and to have groups made up of the C-suite, middle managers, and people at the ground level come together and create some initiatives. It doesn't have to be really extreme business cases that require integration of multiple systems; create an initiative within each department, so that you get collaboration between the users and the C-suite, and it's purpose-built for that particular department. That way, I think, you integrate guardrails along with use cases, along with taking the fear factor out of learning it and embedding the "so what" into your day-to-day.
Beyond Text Generation: Where AI Sophistication Breaks Down
Matt Wilkinson [17:54]
When we asked about what people were using it for, unsurprisingly, 75% were using AI for text generation; far fewer were using it for analytics or automation, or maybe they didn't know they were. What do you feel is the natural next step on that sophistication ladder, and what did you hear from folks on the exhibition floor?
Jasmine Gruia-Gray [18:17]
It was very natural for people to use it for content creation and content editing. I didn't find very many people using it for graphics creation, for example, and I didn't find very many people using it for process efficiencies: taking the grunt work out of scenarios where you're copying and pasting from one document into another, or creating dashboards that can be updated in real time from meeting minutes you completed just a few hours ago. So I think part of it is that folks, again, need more use cases. They need more inspiration about what's possible before they're going to understand how to go up that sophistication ladder beyond content creation.
Matt Wilkinson [19:22]
It's interesting, isn't it? I think the other thing that really struck me was that people aren't even thinking beyond "I'm going to create a piece of content." They're not necessarily thinking about how to daisy-chain prompts, or custom GPTs (AI assistants, whatever we want to call them), together to achieve tasks, which may still leave some copying and pasting of text to stitch things together. I didn't hear many people using it for reviewing, or even that much for research, although a few were using it for competitive analysis, which I did think was a really interesting change from some of the things we found out when we were talking to people in Washington over the summer.
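To make "daisy-chaining" concrete, here is a minimal Python sketch in which each AI step feeds the next, so nothing has to be copied and pasted by hand. It assumes the OpenAI Python SDK and an API key in the environment; the model name is illustrative, and any LLM provider works the same way.

```python
# Minimal sketch of daisy-chained prompts: each step's output
# becomes the next step's input, with no manual copy/paste.
# Assumes `pip install openai` and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One link in the chain: send a prompt, return the text reply."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name, not a recommendation
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Draft -> critique -> rewrite, chained automatically.
draft = ask("Draft a 100-word product blurb for a lab automation platform.")
critique = ask(f"List the three biggest weaknesses of this blurb:\n{draft}")
final = ask(
    f"Rewrite the blurb to fix these weaknesses.\n\n"
    f"Weaknesses:\n{critique}\n\nBlurb:\n{draft}"
)
print(final)
```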
Jasmine Gruia-Gray [20:00]
Yeah, brainstorming is a fantastic use case. I'm going to put a shameless plug in here for Atlas and Persona AI: create a GPT that is a synthetic customer, and then brainstorm with that synthetic customer on your marketing content or your sales presentation. And here's shameless plug number two: for folks in sales and marketing who really want to dig deeper into use cases and how to think beyond the traditional uses of AI, I strongly suggest they attend the upcoming SAMPS conference, the Sales and Marketing Professionals in Science conference, on December 3rd and 4th in Boston.
Matt Wilkinson [20:48]
Yeah, of course, I hope they come as well, because I'll get to meet them in person, which would be fantastic. So how do you think we can help close that confidence gap between light and power users? For myself, because I realized early on that I had to dive as deep as I could into this, I find it quite difficult to see that disconnect, you know, the fear around adoption. So what can we do to really help people become more familiar with using these tools?
Closing the Confidence Gap: AI Clubs and Train-the-Trainer Models
Jasmine Gruia-Gray [21:15]
Here's where I'm going to combine a little bit of high school with our day-to-day professional lives. High school, in the sense that we always used to have these after-school clubs; there was the chess club, and that's where you got to teach each other moves and learn from each other. So combine that sort of club atmosphere with an initiative in your office, which ultimately means a train-the-trainer approach. Even if you're a light user, you can contribute to helping others learn how to use AI and build different use cases. And the beauty of a novice is in asking questions; there's no such thing as a dumb question, especially when it comes to AI. That way the light user, and even the power user helping them, is learning more and sharpening their skill sets as well. So that's my challenge to our listeners: build an AI club, work together, and try it out.
Matt Wilkinson [22:24]
Yeah, one of the things that always surprises me is how few people actually ask the AI how to do things, how to prompt the AI better. When Claude launched its Skills, which was in the last month or two, I quickly came up with the use case of being able to call one of my groups of Persona AI personas into any chat I have within Claude, and I didn't know how to do it. So I uploaded the documentation on how to create a Skill, I gave it all of the materials, and it did everything for me: it created the database it needed and then told me how to go about putting that Skill into the back end of Claude. Suddenly I had a Skill that I can use wherever I want to use it. The AI is incredibly helpful, so just ask it to help you do what you want to achieve, especially with the AI itself. It doesn't necessarily always have the most up-to-date knowledge about how to do things within itself, which is always interesting, so you do sometimes have to rely on a web search, but the information is there. Being able to collaborate with the AI, and getting the AI to help you get to where you want to go, can be really powerful. The other thing I've found incredibly helpful, as a top tip for anybody struggling to get what they want out of an AI: if you put your prompts in, get a first draft, go through edits, and end up in a place that's different from the original output, put that final copy back in and say, hey, this is where we ended up after editing; help me build better prompts, guide my prompting, so that we get closer to this final output. By doing that, you end up with something that's repeatable and really, really robust.
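Matt's prompt-feedback tip translates directly into a reusable pattern. Here is a minimal sketch, again assuming the OpenAI Python SDK; the example prompt and placeholder copy are hypothetical.

```python
# Minimal sketch of the prompt-feedback loop Matt describes: show the
# AI your original prompt and the human-edited final copy, and ask it
# to reverse-engineer a better, reusable prompt.
from openai import OpenAI

client = OpenAI()

original_prompt = "Write a LinkedIn post announcing our new assay kit."
final_copy = "<paste the copy you actually published after editing>"

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{
        "role": "user",
        "content": (
            f"I started with this prompt:\n{original_prompt}\n\n"
            f"After several rounds of editing, I ended up here:\n{final_copy}\n\n"
            "Write an improved, reusable prompt that would have produced "
            "something much closer to the final version in one shot."
        ),
    }],
)
print(reply.choices[0].message.content)
```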
Jasmine Gruia-Gray [24:03]
And keep remembering to challenge the AI's answers. It may not connect to what your experiences are; it may not really understand the nuances of what you're trying to get out of it. So use it as a companion; ask it questions back and forth, just as you're saying. So do you think that AI maturity could become this new differentiator between thriving and lagging commercial teams?
AI Maturity as Competitive Differentiator
Matt Wilkinson [24:34]
Absolutely. The ability for people using AI to accomplish things in almost no time, to compress the timelines to create something, is just astronomical. You can create websites in minutes. You can create new positioning, and you can interrogate that positioning through the eyes of your persona if you're using Persona AI. You can create new text; you can create new things. Now, of course, there's work in getting it to the point where it's ready and you're comfortable with it, and there's still a need for the expert in the loop; please don't think I'm ever going to say there isn't. But the speed at which you can operate, the volume at which you can create really, really high-quality outputs, the way you can become more focused on the customer through using AI: if you're able to create customised messages for the different personas within your buying group, which you absolutely can, then that's more likely to hit. And if you take that a step further and understand somebody's role, and then maybe their personality type using a tool like Humantic AI (apologies for the shameless plug), you're able to get even closer, not just to what's important to that person in their role, but to how they want to receive information. By doing all of those things, what you're really able to do is accelerate your path to results. And of course, we're still working with humans, at least for the time being, but I think there's a moment where, if we don't jump on board, we're going to end up in trouble. You just have to look at the contrasting fortunes of two of the world's biggest ad agencies: WPP is currently struggling and has seen significant amounts knocked off its market value, whereas Publicis has invested heavily in AI and has seen a completely contrasting trajectory. While WPP has lost about £5 billion off its share price over the last three or four years, Publicis has done almost the opposite. There are other things going on behind the scenes, but I think that just goes to show that the ability to adopt and adapt is so crucial for organisations today.
Jasmine Gruia-Gray [26:44]
Yeah, I'd add another word to "adopt and adapt": preparedness, as professionals. This is the gift that keeps on giving. It helps you prepare for a customer presentation, where you can debate with it what objections the customer is likely to raise. It helps you prepare for an internal marketing presentation. It helps you become more persuasive when you want to ask your manager for something. So I think it's absolutely going to be the differentiator in all of our professional lives as well as our personal lives.
Creating Space for Human Connection
Matt Wilkinson [27:26]
No, absolutely. I think it's one of those things where we're just going to have to be really, really careful about maintaining personal connections and creating those spaces to connect with real human beings. But we can use AI to help us get there, and maybe create more space to spend time with humans rather than just sitting behind spreadsheets.
Jasmine Gruia-Gray [27:48]
Completely agree. Yeah, it is about those relationships, and giving us back time to have deeper relationships with the folks that we care about, as well as new folks that we want to learn about. Well, this was spectacular. Thank you, I really appreciate your insights, Matt, and I'm looking forward to seeing everybody again on A Splice of Life Science Marketing, the podcast.
Matt Wilkinson [28:18]
Thank you, Jasmine, and thank you to everybody that's listened.