Welcome to today’s episode of The Communication Solution podcast with Casey Jackson, John Gilbert and Danielle Cantin. We love talking about Motivational Interviewing, and about improving outcomes for individuals, organizations, and the communities that they serve.
Today we delve into the intriguing intersection of motivational interviewing (MI) and artificial intelligence (AI).
In this episode, we discuss:
- Exploring the intersection of motivational interviewing and artificial intelligence (AI)
- Discussion on the influence of AI in various industries, including branding and marketing
- Contemplating the perception of robots as nonjudgmental and its impact on human interaction
- Ethical considerations surrounding the application of AI in helping professions
- Reflection on the potential benefits and risks of integrating AI in healthcare and behavioral interventions
- Debate on the balance between AI efficiency and preserving human compassion in interactions
- Consideration of the future implications of AI on professions like coaching and therapy
You don’t want to miss this one! Make sure to rate us or share this podcast. It would mean so much to us!
Thank you for listening to the Communication Solution Podcast with Casey Jackson and John Gilbert. As always, this podcast is all about you. If you have questions, thoughts, topic suggestions, or ideas, please send them our way at [email protected]. For more resources, feel free to check out ifioc.com.
Want a transcript? See below!
Hello, and welcome to the Communication Solution Podcast with Casey Jackson and John Gilbert. I'm your host, Danielle Cantin. We love to talk about communication. We love to talk about solutions, and we love to talk about providing measurable results for individuals, organizations, and the communities they serve.
Hi, I'm Danielle Cantin, here facilitating the Communication Solution Podcast, and I'm here with Casey Jackson and John Gilbert. Thanks for being here, guys. Thanks for showing up, Danielle. Thank you. We're here to talk about another journey of motivational interviewing, everything motivational interviewing.
And how it relates to everything else in the world. So, this topic is very hot: it's AI, artificial intelligence. And I find it incredibly intriguing, being in branding and marketing, and seeing how it's overtaking that industry. Every single industry is being touched by AI and the influence of AI. And I can't help but go back to when I first learned about motivational interviewing, when you exposed me to a particular article from the Journal of Medical Internet Research, from, I think it was, UK and Australian scientists, that said robots are perceived as nonjudgmental, more nonjudgmental than people. So I thought I'd kind of kick it off and hand it over to you guys to dive into everything related to AI. Oh, you know, and this is going to be a little bit meta, which is perfect for AI, but when I found that article, I was literally in an exercise for myself of not getting sucked into push notifications, because I don't want to keep feeding the algorithms.
This is just my own Casey Jackson, trying-to-navigate-the-world-and-not-be-overrun-by-the-Matrix take on things. And of course, so what happens? I get a push notification from CNN, and the little headline on that push notification was: research shows that robots are more effective at motivation than humans.
So talk about meta: I'm trying to get out of this, and they're throwing me an article on AI. Those algorithms are literally that good. So of course, as I'm twitching and trying not to push the link, I obviously end up pushing the link. And my issue with it, and I think this is a great way to start off and it's going to spark a lot of thought, is that when I read the article, my issue with it was, well, the areas John and I have been kind of obsessed with are fidelity and measurability.
So basically, when I read the research, what they were saying the quote unquote robot, or AI, was doing, fundamentally, when they listed the parameters, it was not doing motivational interviewing. And so that's kind of what annoyed me, that they made a correlation with motivational interviewing just because what the robot was doing was expressing empathy, showing positive regard, and giving affirmations.
So for me, what I always compare that to, you know, Danielle, you're always asking, kind of from the non-clinician perspective, what does that really mean? And for me, when I use the grandma's recipe analogy, it's like, well, yeah, you use flour, sugar, and butter for all sorts of recipes; that doesn't make it grandma's specifically.
It just means that there's all sorts of things. So when you're using unconditional positive regard and accurate empathy, that in and of itself does not make it measurable, quote unquote, motivational interviewing. But what did strike me, and this is where, you know, John was talking about positive regard, and I think of it in terms also of equipoise, is the finding in this research, which is where the headline came from: it was nearly unanimous that the participants preferred the robot interaction to the human interaction.
And the most consistent theme, it was nearly unanimous, is that the robots were perceived as nonjudgmental. And that's one of those things that is fascinating, because in some ways you'd think, well, isn't that what makes us human, having our little pieces of bias that make us dimensional?
But when people are trying to deal with their own issues, they really don’t care what your bias is. And so I think those are the things where it looks like, wow, you know, where are the benefits to AI and motivational interviewing? And so that’s just kind of my, it’s my brain’s kind of foray into this conversation of like, wow, this is really interesting that it’s starting to penetrate and we know it’s penetrating.
You're talking marketing, Danielle, where John and I were talking about it in terms of, you know, clinical applications, and in coaching and all sorts of other things, where people are saying, you know, is AI kind of taking over the world, or poised to do that? So that's my two cents out of the gate.
Oh my gosh. I want to replay the last four minutes and 30 seconds, because I had so much going through my mind, but I was trying to stay present, letting that go and not hanging onto it, and then another thing would come in. And so much of this, there are so many things around: what are we trying to do? One of the things, when we're trying to help someone, or we're trying to support someone, is what are we trying to do with MI? What is AI trying to do? Because I just want to hone in that if AI right now is an algorithm set with lots of information, it can be helpful for accessing information in some sort of a way, so long as it's sourced from reliable sources. And on a previous podcast we talked about effective psychotherapy, and part of that is having information, but it's a small part of change, and sustained change, and being effective with getting help, right? Or getting guidance. So my brain goes back to: maybe there's help just in having information. I think, especially for people that are interested and ready, that might be very helpful. Google has thrived for a reason and all that, right? And so there's going to be benefit well beyond MI, well beyond helping professions.
There's going to be risk beyond all that too. But when we think about it in relation to MI, which is what you were kind of interested in with this, Danielle, there's a lot that comes to mind from what you said, Casey, which is: what does it mean to be human? Does it matter? You know, and at the end of the day, what are we trying to do?
Right? And I think just starting with those questions can be helpful, because if we're trying to find the things that are most helpful for other people, there's the whole thing you went through this morning, talking about training, and the other podcast around effective psychotherapy.
Great, we're getting it down. Well, what if a robot could do that? Do we want that, or do we not want that? I think that's kind of the bigger question here, rather than just because we could, should we? And that's where I question, and it might get philosophical, it might get into all my biases: I believe true equipoise is possible with artificial intelligence, true equal position, which I don't believe is 100% possible for a human being.
But does that mean that we should go for AI and not a human being? And I'll lastly say, and then I'm curious about either of your thoughts on this, of course, that to me it comes back to: do we want to train compassion out of us as humankind, or, and I can say this, or I do say this, from a very privileged place in a lot of ways,
do we go through what it's going to take to have more compassion for each other over time? Or do we feel like we can hand that off to AI, knowing that AI doesn't have EI, emotional intelligence, right now, and the compassion that we are trying to build with MI and what it stands for, we just outsource to robots? To do what?
And I just am kind of afraid, and that could just be the fear of the unknown and all the other Matrix-y stuff, but that by focusing on AI doing these things, we're gonna lose compassion for each other, which is gonna have a lot bigger ramifications. So I know that's beyond MI, but MI is so much about compassion
that I feel like it relates to training in MI and training other people in how to help people. Yeah. Now, Danielle, I can see why you want to do this as a two-parter, because my brain, it's exploded, then tried to come back into a coherent message, and then gone back to exploding again. I know that within the last several months there was an international summit specifically to start discussing drafting up ethics around the application of AI. And so that's what I think you're talking about, John. It's just the ethical application of it. And, you know, the thing that most comes to mind for everyone, even with, you know, the atomic bomb or lasers, is that there's so much healing and health that can happen from innovation and technology, and there's so much damage that can be done by innovation and technology. It is just an ethical consideration. And I think that's what happens when you bring it back down to the n of one, which is what I always try to do when I'm looking through that MI lens. Where I struggle ethically is when I look at the cost and the wait lists,
and the number of people that are drifting away from helping professions because of the burnout, especially post-COVID. So access to resources is dwindling because they're expensive, and they're not always effective and efficient, because people don't get the amount of service, in the dose that they need, as readily as they need it. Which, boom, boom, boom, that gets taken care of with AI, if it's effective and measurably effective. So it's like, wow, the benefit of people getting the service they need from something they self-identify as value added, it's hard to argue against evolution toward that. But on the flip side, it is really hard to argue that computers are going to be more effective than a human for human interaction and human growth and development.
The thing that scares me, or causes anxiety in me, is just how much obsession and dependence there already is on social media, period, full stop. Which raises the question: do people prefer interacting with an illusionary world over interacting with real people? Because people don't even like to put their own picture out there until they've edited it.
They don't like to, you know, it's the whole joke about Fakebook versus Facebook, that people just want to put their best foot forward, or the way you're perceived in social media thrives off of perception. And I think that's the thing that makes me nervous: our society, our brains, culturally, are so geared up for that next stage, which would be AI. You know, in the movie, I didn't see the movie, but I know the trailer from the movie Her, where he falls in love with his phone because it's the best relationship he's ever had, in the way that she communicates with him and understands him so deeply, because it's all AI-rooted.
And it's like, you know, we say it's science fiction, except it's on our doorstep and it's not science fiction anymore. It's right around the corner. And so where does this come in with evolution? Or is it the start of, you know, this dystopian society where machines take over? I think that's why this topic is so fascinating.
And then trying to bring it back on track with, you know, its relevance to motivational interviewing.
Yeah, it's such a wide swath of so many areas to potentially go into, as well as so many futuristic scenarios. And one thing that was on my mind around this, that might wrangle it in to some degree, not from super intently listening to all the pieces you just brought in, Casey, but it kind of pulls one of those strings it triggered for me, which was this: as we bring this back into maybe not MI, but into helping professions of a particular kind, I'm thinking of the help of something like medicine, because that's my background, training and being exposed to that and doing some work with that, whether lifestyle or, whatever it is, pharmaceuticals. You have the potential to have this algorithmic thing that we tend to personify, but that is basically like saying we have statistics, because that's what it is.
It's a set of things. It's not its own entity, as far as we understand consciousness, which is extremely limited and its own rabbit hole. But with these statistics, we can support deductive reasoning with doctors and other things, to accurately diagnose and accurately bring information and education. Maybe it's going to help where it's needed.
Even far more, hopefully, in, say, third-world countries, helping women with, you know, education and micro-farm loans and all that stuff that I'm hoping for with technology and blockchain and all that. But at the same time, when we're thinking about it from our vantage point in America, and some of the discussions we're having, I think there is a place to help helpers. And what I've heard is: how adept are you, and how open are you, to utilizing this statistical tool, as it were, this algorithmic tool, to help you impact people? Now, I think it's possible to start doing that better in certain ways. But what I am wondering is based off of a book called The More Beautiful World Our Hearts Know Is Possible.
What I pick up from you, Casey, is: well, how much beauty is there to that? What does it mean to be in a more beautiful world, or what does it mean to have a more beautiful helping situation? Because we can make it better. But the reason my mind is going there is because it's like, well, we could do these things.
Is it the emotional intelligence of humans helping each other that we want to have AI help with, or do we want AI to take the place of that for the sake of resources? Or do we tier this in a way that there's as much open access as possible to this level of good care, but if you want the beautiful human compassion side of it, with another human, that's where you've got to wait three months?
I don't know, but I feel like we're talking almost about: what do we want with human-to-human stuff, and what do we want the AI to be? Or do we think the AI is going to take it all over? I don't think the AI is going to take it all over, at least in any near-term future, according to what I've learned of the capabilities. But I think there's a discussion of what we want with MI, or AI and MI, AI and helping professions. What would be the ideal?
Well, I think that's probably a smart road for us to go down, because for me, where my brain goes into the complicating factors is: it's going to go where there's the most money to be made. That's where it's going to go. You and I are talking from a healthcare, behavioral perspective, but you look at where healthcare has gone and where the most money is being made.
I mean, that's why there are all the, you know, stereotypes about death panels and all these different things: how do you increase profit margin? And that is where power is, especially in the U.S., but globally it's: how do we extract as many resources as we possibly can to put a few people in massive amounts of power?
So I need to keep myself from going down that path as we're having this conversation, because then it's like, yeah, I'm not sure what that has to do with MI. But where I do think it comes in is, I keep going back to access, and it's the same thing you're talking about with, you know, loans for women micro-farming in a third-world country. It's the thing that you have to get into in terms of: if it improves quality of life for an individual in a moment, I think at some point people are getting into, you know, a philosophical splitting of hairs about whether it matters if it's delivered by a robot or by a human, especially if they believe the avatar they're talking to seems to have more humanistic qualities than the actual individual may have. Because with the individual, and this is what you and I talked about with equipoise and the righting reflex, the fixing reflex, because of our innate biases, it's hard for us not to be self-centered, especially in industries where your credibility has to do with your expertise, and how smart people think you are is where your credibility comes from. So where is the credibility when, you know, you have an app that's smarter than you are, and somebody can tap on it and not feel as depressed after, you know, a 10-minute AI interaction?
I think that's where I get nervous about the social media world: people are more geared for that, I think, than for having human interaction. They don't want to leave their houses and go to an appointment; COVID proved that. People just don't want to do face-to-face. I mean, you've got handfuls of people, and it tends to be older generations, that crave that one-on-one interaction, but younger generations don't even pick up the phone.
Anything that can be done electronically would be their preference. So the prep for an AI world is already well underway, you know, and people don't care if there's an algorithm. If I was just talking to you two about wanting to buy, you know, a coffee table, and now all of a sudden I get 15 emails on coffee tables that interest me, then it's like, I don't really care.
You know, I think people are getting raised with that mindset. So I think the struggle, the way we think of it from our generations, is: how can you lose that human contact? That is just dystopian. That's just gut-wrenching, to think people would choose that. The problem is, I think more and more things are leaning toward it. Philosophically, people are saying we don't want that to happen, but in day-to-day operations, people just want a quick fix, and they want it right now.
And if it's a nine out of 10 right now versus waiting three months for a seven out of 10 experience, I'll take the nine out of 10 experience right now with an AI situation over the seven out of 10 experience I had to wait three months to get. I think that becomes philosophical, individually and collectively.
And I think this is where I struggle, if I really bring it, to me, as close to home as I can bring it, where I have my own fears around it. Because I know that there's, you know, AI software out there that can code conversations, and in some aspects it codes more effectively than humans code, like we code for motivational interviewing. There are some aspects they haven't perfected yet, but they're getting better at it every day.
And so there's one side of me that thinks, great, because then we don't have so many arguments about interrater reliability. Your interrater reliability is going to increase substantially if those algorithms are so well done that you knock out a lot of the interrater reliability struggles you have from human to human. Then I think, okay, so if we're not doing coding and coaching, what if AI trains motivational interviewing better than we train motivational interviewing? Then it's like, oh my God. And if it's doing therapy better than I do therapy, where do I exist in the world? You know, those are the things, when you bring it down to that reality. And I may be delusional to think that's not going to happen in my lifetime, but I do think those are real conversations to be had in terms of how this impacts human interaction. So it's not just the AI side; it can go back to behavioral health, you know, my corner of the world, working with human beings.
Well, and I think that's the key. And I know we're looking at our mark for thinking about a potential transition time with this, but I just want to highlight even that thinking, Casey. There are so many layers of thinking going on here, because there are people that would say, well, then you'd be free to do something else with your life.
And then there’s people that are like, well, wait a minute. I want to do this with my life. So then who says I can’t do that with my life? Well, you could do that with your life, but they’re going to, you know. And it’s like this whole concept, cause that also deals with people losing their jobs, no matter what that job is.
In this case, it happens to be my training, but people that are coal mine workers or whatever, right? Like there’s the perspective of, well, upgrade your, you know, job skills. But then it’s like, well, wait a minute, you don’t have my background. You don’t know, you know? So it’s like that psychology is its own psychology.
Then there's the psychology of the Matrix, that it's going to take over everything. Then there's my psychology, that it's going to take over compassion for human beings. But I'm also curious, for a future discussion or where we might pick it up: well, what if it doesn't do all of those things, but we could harness it to help with more efficiency and effectiveness of service provision, in ways that equalize as much power as we could? I have hope. And because we talked about self-fulfilling prophecies before, to be cognizant that the hope we could hold might have some kind of outcome associated with it, and that doesn't need to be metaphysical. That's where my brain goes for maybe a future discussion, but I just wanted to throw that out there as a potential to hopefully entertain.
Maybe, I don't know, but maybe I'm wrong and just optimistically naive and going to be destroyed by everything. I know, you know, that is a probability. We'll be sitting here in your sandbox, playing with your research, and all of a sudden a big AI steamroller rolls right over the top of you. And I went out doing something I love, which probably is closer to the point, you know. And I think what we need to do to dial this in, you know, in a part two with this, is to rein it in closer to, okay, the world of motivational interviewing. Because part of this is so far above our pay grade to even talk on a level of what AI really means, so far beyond us, but I think it is our best way of starting a conversation for ourselves in terms of, okay, what does this mean in the world of MI?
I think that'd be a fun one to make a part two on. Yeah, why don't we do that, guys? Because this is really compelling. I like how big and wide we went, because all of those considerations are really important. And my mind, my world, just expanded by listening to all of your different points of view.
And then I think about the way you set up social media, Casey, and, John, what do we really want? I mean, let's dial back and do a part two of this where we start bringing it back in a little bit more. And it's just fun being exposed to all of our different biases. When I think about AI maybe potentially doing things really perfectly and beautifully, I go back to, gosh, I keep going back: what do I want? What do I want? That was a great way to start the podcast, because I'm like, oh, I believe in the growth of humanity, in us constantly growing and evolving and becoming better and better, more compassionate people. So the friction of having that not be perfect with each other, it's beautiful. And how does AI come in and help us accelerate that?
But again, that's just me. I had to end with, oh, this is my vision of what I potentially want, but that's what I hope all the listeners are hearing and thinking about for themselves. So let's circle back on a round two. Sounds good. Thanks, Danielle.
Thank you for listening to the Communication Solution Podcast with Casey Jackson and John Gilbert. As always, this podcast is about empowering you on your journey to change the world. So if you have questions, suggestions, or ideas, send them our way at [email protected]. That's C-A-S-E-Y at I-F-I-O-C dot com. For more information or to schedule training, visit ifioc.com. Until our next Communication Solution podcast, keep changing the world.