For this episode, Dr. Diane Hamilton’s guest is Steve Ardire, co-founder of SignalAction.AI. SignalAction.AI is a digital therapeutics company that creates software solutions for behavioral health and well-being. Steve explains how they use tonal analysis combined with emotional analysis to provide a more accurate indication of a patient’s mental state. SignalAction.AI is geared towards telehealth, helping therapists and clinicians give personalized treatment via online meetings. Using its reflection analysis multimodal engine, it can pick up on some of the more hidden nuances of a patient’s behavior to shed light on their mental state. Join in the discussion to learn more!
I’m glad you joined us because we have Steve Ardire here. Steve advises AI startups. He co-founded SignalAction.AI, which is a digital therapeutics company that creates software solutions for behavioral health and well-being. It’s going to be a fascinating show.
—
Watch the episode here
Listen to the podcast here
Software Solutions: SignalAction.AI For Behavioral Health And Well-Being With Steve Ardire
I am here with Steve Ardire, who is an AI force multiplier. He has advised 25 AI startups over the past several years and co-founded SignalAction.AI, which is a digital therapeutics company that creates software solutions for behavioral health and well-being. He’s a speaker who focuses on augmented intelligence and the future of work. It’s nice to have you here, Steve.
Thank you, Diane. It’s a pleasure to be on your show.
I was looking forward to this because I serve as a board member of a company that is for mental health, behavioral health, and well-being. Having worked in pharmaceutical sales and different things, I got a little bit of background in the healthcare field. I want to know a little bit of background on you. What led you to this point? Give me your backstory if you don’t mind.
I’m a geologist by educational training, which was my first career. People ask, “How did you make your way to the AI space? What was that journey?” I did an interview a few years ago and I said, “The glueware for making an easy transition across the decades was that in geology, you learn how to do pattern matching, connecting the dots together.” That modality of thinking is similar to what AI machine learning does in pattern matching, for both your training and your endpoint outcome. That was the seed corn for getting to where I am.
When I looked at your site, I was looking at the About page. I wrote my dissertation on emotional intelligence so I’m fascinated. You got a big picture of a brain. It’s got emotional intelligence, behavioral intelligence, environmental intelligence, and all these different types of intelligence. Gardner and some of that interested you probably as well. I had Daniel Goleman on the show talking about emotional intelligence. All of these types of intelligence, you’re talking about these with AI. Tell me what you do at SignalAction.
The impetus for SignalAction came after doing a lot of advisory work for 25 AI startups, mostly in the enterprise space. Over the last few years, healthcare has bubbled up to be the number one vertical in my engagements, current ones as well as past ones. Detection through imagery is still the number one use case in healthcare and precision medicine: radiology, detecting breast cancer, all the types of image analysis. That’s the bread and butter. There are also uses in precision medicine and genomics.
When I was looking at the space of mental health with my co-founders, we asked, “Why is mental health lagging? You can use similar methodologies, though more focused around voice and emotions,” back to Goleman and your PhD work in emotional intelligence. It’s only over the past several months that the AI technology for tonal analysis has matured. You can pick up the right type of inflection and tonality to figure out who is stressed.
That was the impetus for SignalAction.AI. As we say, we’re looking to thread together a multimodal assessment. What does the word multimodal mean? It means numerous modalities. There are too many dependencies on doing text chat with a chatbot; it only scratches the surface. If you can incorporate vision along with the voice, and then the emotions, you can reveal deeper insights. That was the impetus for how we constructed the front end: tonal analysis combined with emotional analysis to provide a more accurate indication of a mental state.
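To make the multimodal idea concrete, here is a minimal sketch of late fusion, assuming per-modality distress scores from voice tone, facial emotion, and text sentiment combined by a weighted average. The score names, values, and weights are illustrative assumptions, not SignalAction.AI’s actual model.

```python
# Hypothetical late-fusion sketch: combine per-modality distress scores
# into one indicator. Names, values, and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class ModalityScore:
    name: str
    score: float   # 0.0 (calm) .. 1.0 (high distress)
    weight: float  # relative trust placed in this modality

def fuse(scores: list[ModalityScore]) -> float:
    """Weighted average of per-modality distress scores."""
    total_weight = sum(s.weight for s in scores)
    return sum(s.score * s.weight for s in scores) / total_weight

session = [
    ModalityScore("voice_tone", score=0.72, weight=0.5),      # inflection/tonality model
    ModalityScore("facial_emotion", score=0.61, weight=0.3),  # vision-based emotion model
    ModalityScore("text_sentiment", score=0.40, weight=0.2),  # transcript sentiment
]
print(f"fused distress indicator: {fuse(session):.2f}")
```

A production system would learn the weights from clinical outcomes rather than fixing them by hand, but the fusion structure is the same.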
I come in and what happens? What do I go through in the process?
What happens is the following.
Would I do this in a doctor’s office? Where would I do this?
We’re geared towards telehealth. The main impetus for this, Diane, was that through Zoom or whatever online meeting platform, we can take our reflection analysis multimodal engine and run it concomitantly with whatever the clinician or therapist is already guiding their patients through. What we’re picking up on are some of the more hidden nuances in the conversational dialogue, how patients are reacting behaviorally and emotionally. We’re not looking to supplant or replace skilled therapists and clinicians. We’re looking to augment their ability to personalize treatment.
I had Paul Ekman on the show. We were talking about how Lie to Me was based on all his work with facial expressions, which was great fun to watch. It was interesting to me that there are certain faces we all make. Even if you’re blind, you make the same happy or sad expressions despite never having seen other people make them. That came to mind as you were talking about this. I’m wondering, can you pick up if somebody is a sociopath, has borderline personality disorder, or something else majorly wrong? Does it tell you that somebody is depressed? What do you get from that?
In the effort we’re building to get more granular, we’re starting with the main components: levels of anxiety and depression. Those are the two sweet spots, as I call them. The metrics out there post-COVID show that 50% of Americans, with similar metrics throughout the world, say the pandemic has harmed their mental health. Among teenagers and young adults, over two-thirds report significant depressive symptoms. In the 18-to-24 age bracket, you’re getting more potential for suicide. That’s big enough for us. We’re well-grounded enough to say that we don’t want to get into schizophrenia. If we can address depression and anxiety, or a combination of the two, through our multimodal AI, that’s a great enough market impact.
If somebody is going to a psychologist, they’re going in because they’re depressed or something like that. What is this telling you on top of the fact that they’re in there because they’re not feeling great? You’ve got this emotional intelligence, behavior intelligence, environmental intelligence, conversational intelligence, NLP dialogue, and all these things that you list on the site. What kind of data are you giving the practitioner?
We’re doing a near real-time analysis of what’s going on in the session. Can you record your Zoom meeting? Yes, and you can record the patient therapy session as well. With the opt-in of having the camera on to do the vision analysis for emotion, you can start to thread it all together. The existing tools, like GAD-7 and some of the others, are survey-oriented. We’re looking to tease in a more sophisticated methodology that includes social signals. Nearly everyone’s profile today is on Facebook, Twitter, and LinkedIn.
You can tell a lot about the background or the personas of these people through their social media. That has been lacking in terms of integration into the mental health space. When you combine that with algorithms that can aggregate social media, data on your smartphone, biometrics, electronic health records, and therapy session transcripts, it comes right back to identifying patterns, like we kicked off with. That’s a big step forward in terms of being able to augment the clinician’s capabilities.
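As a rough illustration of that aggregation step, the sketch below merges hypothetical per-source feature dictionaries (social media, biometric, transcript) into a single patient feature vector for a downstream pattern-matching model. Every field name here is made up for illustration.

```python
# Illustrative only: merge per-source feature dictionaries into one
# patient feature vector. All field names are hypothetical.
def aggregate_features(*sources: dict[str, float]) -> dict[str, float]:
    merged: dict[str, float] = {}
    for src in sources:
        merged.update(src)  # assumes sources use non-overlapping keys
    return merged

social = {"post_negativity": 0.30, "posting_frequency_drop": 0.60}
biometric = {"resting_heart_rate": 78.0, "avg_sleep_hours": 5.4}
transcript = {"negative_self_reference_rate": 0.44}

features = aggregate_features(social, biometric, transcript)
print(features)  # one vector a classifier can scan for patterns
```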
It’s interesting. In the outreach we’ve done since sharpening our messaging, it’s the 20-something to young 40-something clinicians, psychologists, and psychotherapists who are glomming on to this because they’re more tech-savvy. They can see, “I can use an AI engine to be able to personalize.” You can lock on to the problem set and reach a resolution faster, better, and smarter. That’s a good thing for them in terms of their ability to address their patient base.
With emotional intelligence, for example, you include things like interpersonal skills and empathy. There’s a lot involved in knowing your own emotions, knowing those of others, and acting appropriately. If I’m the patient talking to my psychologist and I demonstrate that I don’t have empathy towards something, does that pull that up? I’m trying to figure out how it pulls certain things up.
The tough part that we’re working on is the semantic classifier. What does semantic classifier mean? It means we can thread together the tonality. What’s used today is the spectrogram; there are psychotherapists who, through a session with just voice, can get a lot of reveals. We wanted to bump it a step further by adding the emotion analysis. That’s the interesting part. When you combine the emotion analysis with being able to tap into a more nuanced and hidden perception, that’s where you can glean some of the more hidden insights. Not everything is going to be spoken by the patient, as you know. It’s interesting. I got an incoming message from a person who used to be on a SEAL Team. He read one of our posts about what we’re doing at SignalAction.AI and he said, “That’s interesting. When we came back from a mission to kill the bad guy, we always had to do a debrief with the psychologist, and we lied our way through.”
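For readers curious about the spectrogram Steve mentions, here is a minimal sketch using SciPy on a synthetic tone standing in for recorded session audio; a real pipeline would feed the resulting time-frequency matrix to the tonal and semantic classifiers.

```python
# Minimal spectrogram sketch: a synthetic 220 Hz tone stands in for a
# recorded voice. Real session audio would be loaded from file instead.
import numpy as np
from scipy.signal import spectrogram

fs = 16_000                                # 16 kHz, common for speech
t = np.linspace(0, 2.0, 2 * fs, endpoint=False)
voice = np.sin(2 * np.pi * 220 * t)        # stand-in speech signal

freqs, times, sxx = spectrogram(voice, fs=fs, nperseg=512)
print(sxx.shape)  # (frequency bins, time frames) handed to a classifier
```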
They knew what they wanted to hear.
That’s right. There’s a silver bullet for you. Through skilled psychotherapists and psychologists, you can still be gamed if you have a savvy patient.
Is it like a lie detector in some respects?
In many respects, absolutely. In other words, if you have more modalities going on besides talking to your psychologist, you can use the Facial Action Coding System developed by Ekman. It can reveal some of those micro-expressions that are much more impactful and meaningful, and maybe even tap into parts of the brain other than the cortex, where you can lie your way through if you’re skilled. When you’re getting certain triggers in emotions, that may introduce a degree of, “This is not cohesive, and I’m detecting some hidden parameters here that could be revealing.”
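As a rough illustration of how FACS-style output can feed such a system, the sketch below maps a few well-known action-unit (AU) combinations to basic emotions. The AU numbers follow Ekman’s coding, but the rule set and detector interface are hypothetical simplifications.

```python
# Hypothetical mapping from detected FACS action units (AUs) to emotions.
# AU6 = cheek raiser, AU12 = lip corner puller, AU1 = inner brow raiser,
# AU4 = brow lowerer, AU15 = lip corner depressor, AU5/7/23 = lid/lip tighteners.
EMOTION_RULES = {
    frozenset({6, 12}): "happiness",
    frozenset({1, 4, 15}): "sadness",
    frozenset({4, 5, 7, 23}): "anger",
}

def classify(active_aus: set[int]) -> str:
    """Return the first emotion whose full AU signature is present."""
    for signature, emotion in EMOTION_RULES.items():
        if signature <= active_aus:
            return emotion
    return "neutral/unknown"

print(classify({6, 12}))     # happiness (a Duchenne-style smile)
print(classify({1, 4, 15}))  # sadness
```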
I had done a lot of research for the book I wrote with Dr. Maja Zelihic about perception. You talked about hidden perceptions, and we wrote The Power of Perception. We looked at the four factors of the process: we evaluate, predict, interpret, and correlate to make conclusions. We looked at it as a combination of IQ, EQ for Emotional Quotient, CQ for Curiosity Quotient, and CQ for Cultural Quotient, and how all these things interact to have us uniquely look at things. How is this helping practitioners help people see that their perception, even though it’s their reality, might be causing them difficulty?
That’s the heart of the matter in the mental health space. When I did a session at a boutique conference, it was titled Contextual AI for Digital Behavioral Health, which is pretty generic. Some of the things I pointed out were about the video chat sessions with therapists, whether you’re using Talkspace or another service; I don’t want to pick on them, but there’s a plethora of them. I cited a case study where the patient said, “It was so much work to explain myself.” It could be 30 minutes a day spent writing and rereading. The messages from the therapist were spotty, and the patient always had to wait a few hours for a response. When it came, it was often generic and impersonal. He felt like the advice he got was never deep; it was surface level.
There’s only so much even skilled therapists can glean because they don’t have this augmented intelligence engine helping to process what’s going on with things they can’t pick up on. It’s humanly impossible to monitor all the micro-expressions, facial cues, and behavior. Some work has already happened here; there are commercial companies using machine learning to identify patients with a mix of psychotic and depressive symptoms. You’re starting to see this work not just at universities but at healthcare and mental health startups. Mayo Clinic has an algorithm that shows potential in individualizing treatment for depression. This isn’t our a-ha moment alone; we’re trying to thread it together in our own fashion. By looking at more of the characteristics, the personality from social media, you can come up with a much more cohesive and revealing assessment that you can then address.
It’s interesting because I work on several boards, and one of them is Radius AI, which is a local company here in Arizona. Their recognition software can determine what people purchase, how they move through stores, and what they do or don’t do while they’re in the store. It’s fascinating how they use AI for that type of thing. I also serve on the board of All For Life, which is about mental health and is a place for all the different providers to connect. Everybody can find them, almost like a Yelp and Amazon combined, and opt for services. Where do people connect with you? Is this something that you’re taking directly to providers? How is this being delivered?
It took us a while to figure this out. It was a pretty bold endeavor for a nascent company that we’ve mostly bootstrapped. We finally found the right partner, a company called Opeeka (Opeeka.com). They offer a success-focused platform for mental health, behavioral health, and social services. What does that mean? They provide measurability: metrics that can guide people through their circle of care, so the staff at a social service, mental health, or foster care agency can have a much better handle on outcomes.
You probably know the metrics. The health system spends $13,000 per person per year on mental health care, but it cannot evidence a positive impact, reduce costs, or even know how to reduce costs. With their Person-Centered Intelligence Solution, the AI identifies what works for each person, along with the right pathways, to establish success-focused mental and behavioral healthcare. The contention is this: to address behavioral health, you need to look beyond the clinical record. How many patients even understand what their record is, in Epic or Cerner? The patient has no visibility.
That’s where Opeeka’s P-CIS sits on top of these EMR and EHR systems. If you can combine clinical, behavioral, and environmental data, you can get a more complete picture of the healthcare risk. By doing that, you can flip the model. Behavioral health is traditionally reactive, only responding when patients admit they’re struggling. With Opeeka’s methodology, you flip the model: you can better predict which patients may be struggling before they bring it up, and then empower the providers to alleviate the distress before it gets worse. It’s practical.
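A minimal sketch of what “flipping the model” could look like, assuming a simple weighted risk score over clinical, behavioral, and environmental signals. The weights, threshold, and values are invented for illustration and are not Opeeka’s actual P-CIS logic.

```python
# Hypothetical proactive-flagging sketch: weighted risk score over
# three signal categories, each normalized to 0..1.
WEIGHTS = {"clinical": 0.5, "behavioral": 0.3, "environmental": 0.2}
ALERT_THRESHOLD = 0.6

def risk_score(signals: dict[str, float]) -> float:
    """Weighted sum of per-category risk signals."""
    return sum(WEIGHTS[category] * value for category, value in signals.items())

patient = {"clinical": 0.4, "behavioral": 0.9, "environmental": 0.8}
score = risk_score(patient)
if score > ALERT_THRESHOLD:
    print(f"flag for proactive outreach (score={score:.2f})")  # 0.63
```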
I worked for AstraZeneca for 20 years, 15 in pharmaceutical sales. I can remember selling a migraine drug and going to a doctor’s office. I remember talking to this doctor. You know how in sales, you paint a picture for them and you’re like, “We won’t have that person call you at night because they’re having a migraine. They won’t have to go to the ER. I gave them all the benefits.” He looked at me and he said, “I don’t care if they go to the ER. That’s out of somebody else’s budget.” I’m thinking of the doctor’s perspective because these guys are in business and they’re not all as nice as you think they are. They’re not all as high EQ as you think they are. What is in it for them? If it comes out of somebody else’s budget, is this making them money? Is it giving them more clients? Is it for the nice ones that are trying to help? What’s in it for them?
I’ll tell you what’s in it specifically, and I’m going to cite some opportunities since I’m also helping Opeeka close funding. We’re on a number of calls with digital health VCs. They ask these questions when they see customers like the California Department of Social Services, state Medicaid departments, and Health and Human Services. The Medicaid programs are federally funded. What Kate Cordell, the CEO and co-founder of Opeeka, talks about is that these agencies are feeling the pressure. They’re under a lot of pressure to show much better measurability for the money they’re spending. The same goes for foster care.
You can present a hard ROI with that. As an example, one of the earlier foster agencies was doing outpatient mental care, and all the youth had complex needs, where the average cost of a care episode is above $30,000. Using Opeeka, they decreased the cost per patient from $12,000 to between $7,000 and $8,000. That’s almost $4,000 per patient in savings. There was a 26% decrease in the other metric, the combined overall cost. There was also a greater than 50% decrease in the cost for a person to achieve an area of skill, combined with an increase in resolution. That’s huge.
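A quick arithmetic check of those per-patient figures (the dollar amounts are approximate, as quoted in the interview):

```python
# Worked check of the per-patient savings quoted above; amounts approximate.
cost_before = 12_000        # per-patient cost before Opeeka
cost_after = 8_000          # roughly $7,000 to $8,000 after
savings = cost_before - cost_after
print(f"savings per patient: ${savings:,} ({savings / cost_before:.0%} decrease)")
# The separate 26% figure refers to the combined overall cost metric.
```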
Everybody says assessment, but what’s your return on an assessment? If you can get a 177% increase in resolution and a decrease in the cost to resolve an area of concern, all of that goes right to the bottom line. Those metrics, before implementation versus after implementation, are impressive. They’re hard ROI. You’re not doing what I call the slippery, folksy stuff, because what you can’t analyze, you can’t measure. That’s where you can provide a much more meaningful impact.
It’s important to be able to do that. I know it’s a hard thing in a lot of companies and startups. When they’re new, they don’t have data sometimes. It’s good to know that there’s some research in what you’re doing and what they’re finding. A lot of people are going to be interested in this. I would like for people to be able to find you and learn more about what you do. Is there some website or something you can share?
Besides my LinkedIn, my website is ForceMultiplierSteveArdire.com. I’ll also text it to you via LinkedIn. That is like a quick synopsis of what I do. The website for Signal Action is SignalAction.ai.
It was interesting to check out the site. I hope people take some time to do that. Thank you so much, Steve, for being on the show. I found this fascinating.
It was a pleasure, Diane. Thank you.
You’re welcome.
Important links:
- SignalAction.AI
- Daniel Goleman – Past episode
- Paul Ekman – Past episode
- The Power of Perception
- LinkedIn – Steve Ardire
- ForceMultiplierSteveArdire.com
About Steve Ardire
Steve Ardire is an AI force multiplier. He has advised 25 AI startups over the past 7 years and co-founded SignalAction.AI, which is a digital therapeutics company that creates software solutions for behavioral health and well-being. Steve is a speaker and focuses on augmented intelligence and the future of work.
Love the show? Subscribe, rate, review, and share!