Our guest today holds a PhD in organizational psychology and has been working on data products in the health and wellness space for over a decade. We cover a lot of ground in this interview: how to create data products that work, how to avoid the unexpected consequences of poorly designed data interventions, and the importance of ethnographic thinking in data science.
We’ll also talk about reducing friction in data collection, the coaching data product model, and surprising things we can learn when people’s routines are broken. From today’s episode, you’ll come away with a better understanding of how to build contextually relevant data products that make a difference in people’s lives.
Ginette: I’m Ginette.
Curtis: And I’m Curtis.
Ginette: And you are listening to Data Crunch.
Curtis: A podcast about how applied data science, machine learning, and artificial intelligence are changing the world.
Ginette: Data Crunch is produced by the Data Crunch Corporation, an analytics training and consulting company.
Today you’ll hear the story of an industry’s decade-long attempt to have a positive effect on people’s health by providing data to consumers. Give people access to their data, and their health behaviors will change for the better. The idea was noble, the intention good, and the approach seemingly sensible. Why, then, have most of the actual benefits not materialized? What went wrong?
If you’re working on embedded analytics or building data-powered products, we’ve pulled together a report detailing the top five reasons these projects usually fail and how to avoid them. We put this together after years of experience and interviews on the topic, so it will definitely help keep you on the right track. You can get the report at datacrunchcorp.com/embed, or datacrunchpodcast.com/embed.
Now let’s hear from Michael Rucker, an organizational psychologist working as VP of technology at Active Wellness.
Michael Rucker: From a data aspect, I drank the Kool-Aid when a TED talk came out six or seven years ago called “Data’s the Next Blockbuster Drug.” Coupling those ideas with my background in social psychology, I was a firm believer that making people aware of what they are doing through data, and empowering folks with that data back in the infancy of the IoT world’s promise, would be enough to improve outcomes in wellbeing, whatever wellbeing means to you, right? And it’s clear now, after doing this almost a decade, that that’s not the case. For the most part, unless you’re a quote-unquote geek who’s part of the quantified-self movement or knows how to take data and create your own product around it, data in and of itself has proven to have little utility to the common consumer of different wellness modalities and interventions.
And so I think the challenge ahead of us in this current state is how we then do the heavy lifting for the consumer. You know, we have all these amazing sensors. I think sensors are one of the most interesting and provocative things to come out of the innovation of IoT, right? Especially with regards to condition-specific interests of wellbeing, things like glucose monitoring, sleep monitoring, and things of that nature. A quick aside on how this is getting more interesting: when I first entered this space, the best sleep tracker was a Zeo, which was a really great device, kind of ahead of its time, but you had to wear a miner’s helmet to be able to extract that data, right?
So you’re not going to get the common consumer to wear a device like that, and so how are we going to influence their sleep through data? Now you’re able to do it in a much more frictionless way because sensors are improving, so that’s one aspect of data collection, at least, that is evolving quickly and getting better. But again, what’s not is how we use that data to influence behavior and help someone’s journey on the path of wellbeing. So it kind of started with this promise. That promise has not proven true. And so you find me today figuring out, “okay, so we’re getting better data, we’re getting more data than we ever have, but healthcare is still a disaster here in the US.” Right? And so you’re seeing a lot of provocative folks like myself write about that: okay, we keep heralding that this is the era of digital health, yet if you look at our lagging indicators with regards to success metrics, we really haven’t improved the outcomes for the general populace. And that’s a real problem, right? We can’t just keep touting that we’re making cooler and cooler stuff if it’s not doing anything.
Curtis: Yeah. So there’s this concept that everybody has access, or can get access, to their data, but that doesn’t mean they change their behavior, and that doesn’t mean they have better outcomes. So how do you overcome that? I’m assuming that’s what you’re working on, right?
Michael: Yeah. And so I think one of the ways that we’re trying to do it is figuring out how to make better data products. My day job is VP of technology for a company called Active Wellness. We work with hospitals, corporations, and now even residential centers to look for innovative ways to improve the population health of the clients that we serve. And so ultimately it’s how do you blend all of these opportunities to collect data along the individual’s path towards wellbeing, and then package that data and create data products that actually create contextual relevance to that person’s experience.
Curtis: Got it. Can you help us understand that process a little more? ‘Cause I think that’s really important, right? How do you decide on and put systems in place to collect data that’s relevant, and then how do you package that in a way that produces outcomes for people?
Michael: Yeah, so there’s a lot to unpack there, right? Because obviously, this is one of the areas where we’ve broken a lot of eggs, and we’re definitely still learning. The folks that I cohort with are exceptional people, most of them a lot smarter than I am, you know, McKinsey guys and folks that I think tank with. And so we really reverse-engineered the entire customer journey in this regard, because at the end of the day, we see in the CPG space, the consumer packaged goods space, that buying a wearable off the shelf at Best Buy just really doesn’t do anything, right? I think the old narrative five years ago would be, you know, “even my grandma has a Fitbit.” Now if you go to your grandma’s house, that Fitbit is in her junk drawer in the kitchen, right? It’s not getting used.
So one question is how we improve the adoption of the devices so that we get more actionable data, and there are various ways that we’re doing that. One of the ways that Active is trying to get frictionless data is through the use of pervasive cameras. And that has pros and cons, right? Obviously there’s a lot of rhetoric about it, well, rhetoric and good, healthy discourse. I don’t want to sound completely negative even though I’m bullish on the technology, because I do think we always need to think of consumer privacy, but, you know, there’s a lot of talk about how biometrics are being used and issues like that. But ultimately, if we have cameras within our facilities that can get data on you, you never need to put on an uncomfortable heart rate monitor, and it doesn’t add five minutes to the length of your stay within our facility and therefore add additional friction. Because we know, for a broad spectrum of our population, health modalities aren’t looked at as leisure time, right? I mean, it’s difficult to get into our four walls. And I think that’s one thing that we butted up against for quite some time: we tend to be fanboys of our product. If you look at the inspirational videos that promote our offerings, and I’m talking about just the general fitness space, people look excited. They look like they’re on vacation. But ultimately, if you really dissect the experience, it takes a lot of willpower to get off the couch, travel somewhere, get onto a treadmill, and not get an immediate reward from that experience. Right?
Where I’m going with that is that if we introduce an IoT device that adds an extra three minutes, because you have to put it on, you have to take your shirt off to put it on, et cetera, et cetera, even those small incremental pieces of friction in the experience can make engagement metrics go down.
Then I think one of the most important concepts being talked about a lot right now, at least with the folks that I cohort with, is how you create data products that are contextually relevant. ‘Cause at the end of the day, we found that even though it’s super interesting to us that someone’s walked 10,000 steps per day, if the end user is being honest, they don’t really care. They want to know that they’re getting healthier. They potentially want to know that they’ve created habitual behavior. But at the end of the day, they don’t really care about the steps. So how do we create stories around the data that are contextually relevant to the end users, so that the data becomes important and isn’t just noise?
Curtis: Right, so, have you had success in doing that in certain cases? I’m curious whether there are any experiences you’ve had where you said, “okay, this is how we’re going to do this,” and then you had some positive outcomes.
Michael: Yeah. So I think the best way that we’re doing it now is creating ways for people to tell that story, so empowering our employees to tell that story to the individual. Right? ‘Cause at the end of the day, the end user wants to know how what they’re doing within our four walls, or even, you know, in an outside intervention, is affecting them. We’re doing a lot in the DPP space, diabetes prevention programs, so how are all of the things that we’re telling someone to do affecting biometrics and various outcomes, whether that be weight, whether that be resting glucose, whatever the lagging indicator is? How can they build the bridge between what they’re doing, the efficacy of the intervention, and these lagging indicators? And so sometimes I call it swivel-chair integration, right? But ultimately we are using the human element, because what we found is that even though we are making some traction on the AI side with regards to scaling the efficacy of these programs, at the end of the day, we haven’t solved the piece of extrapolating out the human element.
People really want to feel like they have a relationship with what they’re doing. And so data plays a part in that story, and we found that it’s the most valuable when it empowers the folks that we employ to work with the individual to help them understand how what we’re doing is changing their lives for the better.
Curtis: Interesting. The coaches look at the data, and then they can contextualize it for the people working out in your programs.
Michael: Yeah. And there are two benefits to that, right? If we’re extrapolating out basic bullet points, if we’re going to surmise what the value is: one, you’re getting a contextually relevant story to the end user, and second, you’re tailoring the intervention. So another thing that we haven’t been able to nail yet, but we’re using data to make better, is customizing the intervention based on that data. And so instead of putting someone in sort of a waterfall drip campaign, we are now able to use what in geek speak would be a more “agile process,” even though at the end of the day the coach doesn’t realize that’s what they’re doing, because they’re able to constantly iterate whatever the protocol or intervention is for the end user through data. That creates a kind of circular loop, which is great, right? The data feeds a better intervention, the intervention leads to better lagging indicators, and those lagging indicators come back around and create a better intervention.
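To make that circular loop concrete, here is a minimal sketch in Python. Everything in it, the function names, the metrics, and the adjustment rule, is a hypothetical illustration of iterating an intervention on lagging indicators; the real process Michael describes is coach-driven, not automated.

```python
# Hypothetical sketch of the circular loop: data -> intervention -> lagging
# indicators -> adjusted intervention. All names and rules are illustrative.
import random

def collect_lagging_indicators(member_id: int) -> dict:
    """Stand-in for real measurements (weight, resting glucose, etc.)."""
    return {"resting_glucose_delta": random.uniform(-5, 5)}

def adjust_intervention(protocol: dict, lagging: dict) -> dict:
    """Iterate the protocol based on the latest lagging indicators."""
    updated = dict(protocol)
    # If resting glucose isn't trending down, add a coaching touchpoint.
    if lagging["resting_glucose_delta"] >= 0:
        updated["coach_checkins_per_week"] += 1
    return updated

protocol = {"coach_checkins_per_week": 1, "sessions_per_week": 2}
for week in range(12):
    lagging = collect_lagging_indicators(member_id=42)
    protocol = adjust_intervention(protocol, lagging)  # the loop closes here

print(protocol)
```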
Curtis: That’s really interesting. And if I could take an aside here really quick, I’m also interested in the data science aspect of what you’re talking about. From the data you collect, what are you doing to then say, okay, this is the intervention that should be done? Is there an algorithm that parses all of that? Are there heuristics? How are you doing that?
Michael: Yeah, so again, back to swivel-chair integration: for the most part it’s done ad hoc. We do have algorithms, specifically on the usage side. I think the most provocative thing we’re doing is working with a data science team out of Stanford. And we’re looking at . . . let me back up a second. So it used to be that we would use basic Monte Carlo. We have a fairly rich dataset through our governing consortium, IHRSA, I-H-R-S-A, that looks at all the data within the health club space, and so the trigger used to be that, at the bell of the normal curve, you would want someone to come into the health club two times a week, and if they dropped below that two-check-in threshold, they were an attrition risk. We now know, because we have much deeper datasets and we’re able to look at usage beyond simple check-ins, that that was a bit of folly. And I think you’re seeing that across data science, right? Looking at one metric ends up giving you a lot of noise with regards to your decision-making process. And so instead we’re looking at habitual behavior, and then deltas within that habitual behavior.
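As a rough illustration of the difference, here is a minimal sketch in Python that flags attrition risk from a drop relative to a member’s own routine rather than from a fixed check-in threshold. The data shape and the 20 percent cutoff are assumptions for illustration, not the actual model in use.

```python
# Hypothetical sketch: flag attrition risk from deltas in a member's own
# habitual behavior instead of a single fixed check-in threshold.
# The 20% cutoff and the data shape are illustrative assumptions.
from statistics import mean

def attrition_risk(session_minutes: list[float], recent: int = 2,
                   drop_threshold: float = 0.20) -> bool:
    """True if recent sessions dropped sharply versus the member's baseline."""
    if len(session_minutes) <= recent:
        return False  # not enough history to establish a habit
    baseline = mean(session_minutes[:-recent])
    latest = mean(session_minutes[-recent:])
    # A large relative drop signals risk even if check-in frequency is flat.
    return (baseline - latest) / baseline > drop_threshold

# Still checking in twice a week, but sessions shrank from ~28 to ~21 minutes.
print(attrition_risk([28, 27, 29, 28, 21, 21]))  # True
```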
So for instance, the Stanford group that I’m talking about, they were a bit in stealth mode, but I can finally say their name: it’s Tinoq, T-I-N-O-Q. What they do, using the camera technology that I alluded to before, is look at how folks are engaging in activity within our four walls. And then if that activity itself starts to decline, we know that something’s potentially wrong, and so we can deploy an intervention, whatever that intervention is. And again, we haven’t gotten to the point where the decision engine is going to say, okay, one of these 20 interventions is what needs to be deployed.
There’s still a lot of art to the science once we know. But for instance, where it used to be as rudimentary as check-ins, now it can be: this individual was running on a treadmill for 28 minutes every session, and they’re still coming in two times a week, but now their sessions are 21 minutes. So they’re either getting close to injury, or there’s something going on in their life where that behavior’s changed. But one thing that’s blown me away from working with these folks, and again, I’ve alluded to the fact that they’re much smarter than myself, is how habitual our behavior still is. The one anecdote that blew me away, there are many, but this one was just like, oh my gosh, is how many individuals will actually leave our facilities if their particular piece of equipment is broken, even if four identical pieces of equipment are to the left or right of them. People have their favorite elliptical, their favorite treadmill. If that happens to be out of order, you’d be surprised at the number of people who will just say, “I guess today wasn’t my day to work out,” and just leave.
So I’ve tried to do a little bit of ethnography, you know, as a good data scientist does, to figure out why that is. And unfortunately, the qualitative aspect of that due diligence hasn’t yielded anything, ‘cause the reason always seems to be different: “Oh, well, these two are too far from the TV, and I wanted to watch my favorite show.” So there are all sorts of reasons why that might be, but the commonality is that people have a routine. And if we, as operators, throw up a roadblock or a hurdle in that routine, more often than not, they just won’t engage. And so why that’s important in the context of your question is that we have to be extremely careful, if we do create habitual behavior through data or whatever means of improving UX, that we don’t regress and end up getting in their way or tinkering too much.
That’s another thing. You know, I’m really a big Jefferies fan, and I love all the lean sensibilities that come with the Lean movement. But at the end of the day, I’ve found as an innovator that if I mess around too much, I end up getting really wacky results. A good anecdote of that: we did a small pilot study early on to look at the efficacy of different ways to track data, and one of them was the Withings scale, which is just a wireless scale. At the end of the day, it doesn’t do much more than help the user store their weight; instead of recording it in a journal, it pushes it to the cloud. We found out, when we put a healthy population through that protocol, that there were a couple of people at a very healthy weight who all of a sudden restructured their mindset to think that weight should be the KPI they gauge their wellbeing on. And so we were actually doing some damage.
I mean, I felt like I was in the prison study, where I might have to pull the plug, because when we were collecting qualitative data on their viewpoints of the device, we had two individuals who were like, “oh my gosh, I gained a pound, so, like, I went into the gym three more times this week.” That’s actually behavior that you don’t want to encourage, right? Especially if the individual’s healthy, ‘cause you might be pushing them down an unhealthy path.
Curtis: That’s really interesting. Okay. So even measuring something can affect the outcomes in negative ways, just depending on how you measure, what you measure, and tinkering, like you said. That’s really interesting. I know we’re coming up on time here, and you had mentioned to me a little bit about some leading indicators, I think, that you are looking at, and a book you were working on. So I want to hear a little bit about that before we wrap up here.
Michael: So to build the bridge: with wellbeing in general, I’ve been looking at the power of leading indicators versus lagging indicators. In my space, a leading indicator would be the calories that you take in in a day, right? ‘Cause we know if we eat fewer calories, we’re likely going to have better outcomes. And so if you can control the leading indicator, in a lot of instances with regards to wellbeing, you see higher levels of efficacy with regards to lagging indicators, versus, using the weight loss metaphor, looking at the scale, which is going to fluctuate a lot and not necessarily help a behavioral loop be created.
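As a rough sketch of that distinction, here is a hypothetical example in Python: acting on a controllable leading indicator (daily calorie intake) gives a cleaner behavioral signal than a noisy lagging one (daily scale weight). All the numbers and targets are made up for illustration.

```python
# Hypothetical illustration: a leading indicator you control daily (calories)
# versus a lagging indicator that fluctuates (scale weight). Values are made up.
daily_calories = [2100, 2350, 1980, 2200, 2500, 2050, 2150]  # one week of intake
daily_weight = [182.4, 183.1, 181.9, 182.8, 182.2, 183.0, 182.5]  # water, timing, noise
calorie_target = 2200

days_on_target = sum(1 for kcal in daily_calories if kcal <= calorie_target)
weight_swing = max(daily_weight) - min(daily_weight)

print(f"Leading indicator: {days_on_target}/7 days at or under the calorie target")
print(f"Lagging indicator: weight swung {weight_swing:.1f} lb this week (mostly noise)")
```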
And so with regards to the book, I’m writing a book about fun, because I think happiness is another component of wellbeing, and having moments of fun in your life, to build on the same analogy, really is a leading indicator: if you can look at the fun that you’re having within a given week and create protocols where you instill more fun, you’re actually more likely to increase happiness, in contrast to protocols where you’re just gauging subjective happiness over time. So if anyone has any interest in fun and would like to read a good book about it, I would encourage them to check out my website, michaelrucker.com. It should be coming out next year.
Ginette: Thank you for listening! A huge thank you to our guest Michael Rucker for coming on today, and thank you for being a part of our growing Data Crunch community. We’d love to hear what you learned and liked most from this episode, so please reach out on social media or via the website, datacrunchcorp.com.
If you’re working on embedded analytics or building data-powered products, we’ve pulled together a report detailing the top five reasons these projects usually fail and how to avoid them. We put this together after years of experience and interviews on the topic, so it will definitely help keep you on the right track. You can get the report at datacrunchcorp.com/embed, or datacrunchpodcast.com/embed.
Attributions
Music
“Loopster” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/