Does it feel like your stakeholders aren’t open to adopting your team’s algorithms? Ylan Kazi shares his experience on how to conquer this type of problem.

Ylan Kazi: And that is very important. But I think what we find, especially in larger organizations and in trying to implement these things across an enterprise, is that the relationships and the communication are really key. Without those, you can have the best algorithm in the world, but it will be impossible to implement it.

Ginette: I’m Ginette,

Curtis: and I’m Curtis,

Ginette: and you are listening to Data Crunch,

Curtis: a podcast about how applied data science, machine learning, and artificial intelligence are changing the world.

Ginette: Data Crunch is produced by the Data Crunch Corporation, an analytics, training, and consulting company.

If you want to become the type of tech leader we talk about on our show today, you’ll need to master algorithms, machine learning concepts, computer science basics, and many other important topics. Brilliant is a great place to start digging into these subjects. You can learn at your own pace, whether that’s brushing up on the basics of algebra, learning programming, or digging into cutting-edge topics like Neural Networks.

Brilliant is a website, and app, that makes learning accessible and fun. Their approach is based on problem-solving and active learning. Their courses are laid out like a story, and broken down into pieces so that you can tackle them a little bit at a time.

Sign up for free and start learning by going to Brilliant.org slash Data Crunch, and also the first 200 people that go to that link will get 20% off the annual premium subscription.

Now on to our show. Today we chat with Ylan Kazi, VP of data science and machine learning for UnitedHealth Group.

Ylan: I’ve been in healthcare analytics for most of my career, but I would say I really stumbled into data science and machine learning a few years ago. My background is actually in healthcare administration, and I was supposed to go the healthcare administrator route, but I ended up going into healthcare consulting. So I started off doing healthcare consulting with electronic medical records, and then transitioned my way into healthcare management consulting. And from there, I worked at Target Pharmacy within their healthcare division for three years.

And then I transitioned over to UnitedHealth Group. Originally, when I started on my team close to four years ago, it was mainly an advanced analytics team, pretty heavy into SAS and SQL. Then once we started to see the power of predictive analytics, we really transitioned the team into more of a data science capacity.

Curtis: And you said that there was a point where you saw the value of predictive analytics and you switched your team. I’m curious, because the transition points are interesting. What was it that made you say, “Hey, we should start doing this more”?

Ylan: The biggest thing was we were giving insights to many of the business partners we work with. My team is embedded side by side with the business, and we were almost looking backwards, looking at the past and giving those insights. But what that wasn’t doing is driving future action. So really the power of predictive analytics, and just the power of machine learning in our case, is being able to predict human behavior. And by doing that, we were finding that we could be a lot more proactive and really help out with some of these severe health conditions that our members have. So that was the ‘aha’ moment that we had.

Curtis: Got it. That’s awesome. And so now that you’re doing some of these things, can you give us an overview of the kinds of things you’re looking to predict, or are predicting, and the impact that has on people in your industry?

Ylan: Sure. So we focus on improving health outcomes for our members. What that means at a more tactical level is figuring out which members have chronic diseases, so things like diabetes, heart disease, or cholesterol issues. There are quite a few more, but those are the common ones that affect a lot of people.

Not only our members, but people in the U.S. and around the world. And these diseases are very costly over the long term. So if you take somebody with diabetes, if it’s not managed correctly, that person can end up going to the hospital. They can end up in a lot of bad circumstances.

Curtis: Got it. That’s really interesting. And what kinds of accuracies are you gunning for here? Because oftentimes, if a model is even 60% accurate, it’s better than guessing. But I’m assuming in the healthcare industry you need a higher threshold. What kind of numbers are you looking at there?

Ylan: Sure. I would say, depending on the model and the different disease states, we’re looking at anywhere from 85 to 90%, which is pretty high when you think about it. You’re trying to predict very complex diseases and severities, and really what the member is going to do and how they’re going to respond to an intervention.

So we try to really prioritize having a very high level of accuracy. The other reason we do that is because many times we’re working directly with different providers and clinicians (think doctors, nurses, pharmacists), and there really is a higher bar, a higher standard. Because if a provider doesn’t trust a model, they’re not going to use it. So we also have to provide a pretty big level of detail to our providers and really try to educate them on how they can best utilize the predictive scoring.
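To make that accuracy bar concrete, here is a minimal, hypothetical sketch in Python of the kind of held-out evaluation involved. The data is synthetic and the 85% threshold simply mirrors the range Ylan mentions; none of this reflects UnitedHealth Group’s actual models, features, or pipeline.

```python
# Hypothetical sketch: does a simple risk model clear an 85% accuracy bar?
# Synthetic data only -- not UnitedHealth Group's actual models or features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Fake "member" features (age, prior claims, lab values, etc.) and a
# binary label such as "disease worsens within a year".
X, y = make_classification(n_samples=5000, n_features=12, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

THRESHOLD = 0.85  # the kind of bar providers would need before trusting it
print(f"held-out accuracy: {accuracy:.2%}")
print("meets the bar" if accuracy >= THRESHOLD else "keep iterating")
```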

Curtis: And has there been a lot of pushback on that and people not wanting to trust the algorithm or have people generally seen the value in it?

Ylan: You know, I’d say initially there was pushback. Luckily, we had created some pretty strong relationships, not only with our business stakeholders, but also with our provider stakeholders. But there always is that hesitation, because the first question we always get is: okay, you predicted that the member was going to do this. Why? And when you’re using traditional machine learning algorithms, they have a pretty high level of explainability, but when you start to use the more complex ones, or if you start using things like neural networks, it’s very challenging to have that level of explainability.

And that can really create mistrust, because people are a lot more willing to trust a human than a machine or an algorithm, and they’re more forgiving when a human makes mistakes than when an algorithm makes mistakes. So it’s really about being as transparent as possible and showing the value of these predictions, not just for one member or a few hundred, but over millions of members.

And I think that was something we learned slowly, but we found that the more transparent we were, and the more we could partner with our providers, the more effective these models were.
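As a rough, generic illustration of the explainability gap Ylan describes (not his team’s actual approach), a linear model exposes per-feature weights you can read directly, while a black-box model needs a post-hoc tool such as permutation importance:

```python
# Hypothetical sketch of the explainability gap: a linear model's weights
# are directly readable; a black-box model needs a post-hoc explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
features = ["a1c", "age", "med_adherence", "prior_visits", "bmi"]  # invented names

# Interpretable: each coefficient says how a feature pushes the prediction.
linear = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(features, linear.coef_[0]):
    print(f"{name:>14}: weight {coef:+.2f}")

# Black box: no readable weights, so importance is estimated after the fact
# by shuffling one feature at a time and measuring the accuracy drop.
forest = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=5, random_state=0)
for name, importance in zip(features, result.importances_mean):
    print(f"{name:>14}: importance ~{importance:.3f}")
```

A clinician can sanity-check the first model’s weights against clinical intuition; the second gives only an approximation, which is part of why complex models are a harder sell.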

Curtis: Got it. That’s interesting. Can you give me a concrete example, so we can understand on a personal level how these predictions affect someone in their life, and how they can help them?

Ylan: Sure. So if we take a member that has diabetes, for instance, depending on the severity, this person is going to be on one or more medications. And what we really can do is predict over the course of a year, over the course of a few years, how this member is really going to progress in their disease state.

So the more that they take their medications or see a physician, generally the better off they’re going to be, and the better they’re going to be able to control their diabetes. We find that there are a lot of different barriers to someone managing their diabetes, and we try to utilize some of that information within our modeling to better inform our business stakeholders as well as our provider stakeholders.

Curtis: Once you predict that someone needs an intervention, how have you found it works to actually help them get that intervention? Does the data play a part in that? Or is the data just there to let the provider know that something needs to be done, and then they can handle it?

Or I guess what I’m getting at is: is the data helpful in actually getting a patient to take action? Or is it more that the data informs someone who then tells the patient what they should be doing?

Ylan: It’s more around the informing piece. We don’t want to go in the direction of using one of our predictions to solely drive any type of care. It’s more of a tool that our stakeholders can use in addition to what they already have.

So at the end of the day, we want to inform, let’s say our physicians in this case, which members they need to focus on more, but we’re really relying on that physician to use their clinical judgment and provide that care. So it’s more of a partnership instead of a replacement.

Curtis: So instead of going direct to the patients, it goes through the care providers and they can help. That’s really cool. We’re seeing that model in a lot of other places as well; it seems to work, and it seems like you’re having success with it. Now we’re going to stray a little away from the details of the models you’re running.

But you’ve had lots of experience taking these models from an initial idea, a proof of concept, all the way through making them successful and actually implementing them in a company, and there are a lot of steps there.

So I’d really love to dive into that process, to help our audience members understand: where are the pitfalls? How do I succeed with this? How do I make a whole project successful?

Ylan: Sure. I think for many, many people in our industry, as people are becoming data scientists, there’s such a strong focus on technical development: knowing the right languages, understanding the algorithms, having the math background, et cetera.

And that is very important. But I think what we find, especially in larger organizations and in trying to implement these things across an enterprise, is that the relationships and the communication are really key. Without those, you can have the best algorithm in the world, but it will be impossible to implement it.

And I think that was something that was very eye-opening for me. It’s also something that, as I’ve talked to my peers within the healthcare industry, and even within the broader industry, is a common pain point: we have a great team, very talented.

They create great models, but when we try to actually get them implemented and we want our company to use them, that’s where many of them fail. From my standpoint, I learned pretty early on that I needed to do a better job of creating these relationships, maintaining them, and also showing the value of these models.

That’s a huge piece: if it’s not going to, in our case, improve patient outcomes, or if it’s not going to have a positive ROI, it’s going to be very difficult to convince people to use it. So that’s a big part of it. And then the other thing that comes to mind is that many people are used to doing something that works really well, and it can be very challenging to convince them to try something new that could even outperform their current method. Because their method is working so well, they can be very hesitant to transition to the new way.

Curtis: How do we do that? You’ve had success now in walking this process. What kind of advice would you give people who are trying to do this and maybe running into the same problem?

There are people who want to implement this. It works better, but they’re comfortable with what they have. How do you go about making that change?

Ylan: The way that I started was really by creating a business case: finding a business problem or a business challenge, and then figuring out if machine learning could be applied to it. And in doing this, you don’t just create one business case.

You can identify 5, 10, 20 different business cases, and you start with that portfolio. Then from there, you determine with a critical eye which one or two business cases out of all of those would warrant machine learning, because, contrary to what I think many people outside the industry believe, it can’t be applied to everything.

It’s really a very specific tool, and we want to find where it will provide the most value. I think starting there. And then the relationship piece: it’s really bringing in whoever is going to either be a part of the solution or be affected by the solution, bringing them in very early on and getting their feedback.

It’s always fun to disrupt and to innovate and be the person in charge of it. But we’ve all been on the other side, where somebody else is doing the disruption and the innovation, and it changes up what we do in terms of our work or our role. So it’s about being very empathetic from that standpoint.

And then I would say the other piece is bringing in the subject matter experts. With any of the modeling that we do, or that people do in other companies, the data scientists will eventually develop that subject matter expertise, but in our case, I can’t expect them to be a doctor or a nurse.

So it’s very useful, and also very eye-opening, to include subject matter experts early on.

Curtis: A couple of questions there. One is, you mentioned building out business cases. How do you go about identifying good business cases where machine learning can have an impact? Are there certain criteria or features that you look for in the business that say: okay, this would be a really good machine learning problem to solve that could have a high impact? I think, again, a lot of people are maybe trying to do this, right? The ideation phase of what could we even do with machine learning. What’s a good case? What’s a bad case? Are there things that help you do that?

Ylan: Sure. I think the biggest way we do that, once we create our portfolio of business cases, is prioritizing them by which business cases are going to help the most people and which are going to have the greatest return on investment.

Because if a model is going to help a handful of people and it’s only going to add $20,000 in value, it probably doesn’t make sense to spend three months on it. But if it’s going to potentially help millions of people and create millions of dollars in value, that is something that piques our interest, and where we’ll go a little more in depth to really scope it out.

Now, depending on the industry that you’re in, obviously we’re in healthcare, so we’re all about our patients and our members. But I think that same type of prioritization can be applied in different types of organizations. I’m sure they’ll be more revenue-focused or profit-driven, but that seems to be one of the best ways to do it.

So that’s normally how we start. I would say the only other consideration that has come up a few times is if there’s a business problem and a lot of traditional analytics have been applied to it, but the problem is still there. That would indicate that the current solution is obviously not working.

Could machine learning be applied to it and really help improve the business challenge?
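As a back-of-the-envelope illustration of the portfolio triage Ylan describes, you might score each candidate case by how many people it could help and the value it could create, then work the list top-down. The cases, numbers, and weights below are all invented:

```python
# Hypothetical portfolio triage: rank invented business cases by reach
# (people helped) and estimated value, as Ylan describes.
cases = [
    {"name": "diabetes-progression model", "people": 2_000_000, "value_usd": 40_000_000},
    {"name": "call-center routing tweak",  "people": 500,       "value_usd": 20_000},
    {"name": "readmission-risk scoring",   "people": 750_000,   "value_usd": 12_000_000},
]

# Arbitrary equal weighting of reach and value; a real team would set
# these priorities with its business and clinical stakeholders.
for case in cases:
    case["score"] = 0.5 * case["people"] + 0.5 * case["value_usd"]

for case in sorted(cases, key=lambda c: c["score"], reverse=True):
    print(f'{case["name"]:<28} people={case["people"]:>9,} '
          f'value=${case["value_usd"]:>11,} score={case["score"]:>12,.0f}')
```

By this made-up scoring, the $20,000 case lands at the bottom, matching the intuition that it probably isn’t worth three months of a team’s time.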

Curtis: You also mentioned the importance of domain expertise, right? There are a few different ways people approach this that I’ve heard. One is they hire data scientists and machine learning engineers who have that domain expertise already. Or they hire people with that domain expertise who are analytically inclined and train them on the data science. Or you just hire someone who’s really good at data science and then bring in subject matter experts to work with them.

Have you had experience with those models, or do you have an opinion on which one works better? Maybe all of them work. What are your thoughts there?

Ylan: I would say I’ve had more experience with having the subject matter experts, so having the data scientist who has a healthcare background. I think the benefit to that is that when they first join your team, you’re not having to onboard them on both your processes and healthcare knowledge.

So they’re able to onboard much quicker, and really get into the details much quicker. I have had a few data scientists on my team come from outside of healthcare, and it’s not necessarily a bad thing, but it’s different. You’re having to take them down two training paths: one on your processes, and the other on healthcare knowledge.

That model, though, can also be advantageous. A data scientist who doesn’t have the subject matter expertise is many times coming from a beginner’s view, so they can ask very simple questions that, if you’ve been in the industry, you won’t even think about asking.

So I would say, in an ideal world, whatever industry you’re in, you would have a few data scientists on your team who have experience in the industry, but at the same time, you would have at least a few who can bring a fresh worldview to your team and really ask some of the simple questions that people in the industry would not have thought to ask.

Curtis: I’m curious if there are any experiences that come to your mind where someone like that was on your team and asked a question that seemed out of left field, but then it sparked an idea. Is there a use case where that has happened?

Ylan: I’m trying to think of the best one. I would say, more generally, if I have to think of just a few of the different instances where it occurred, generally it’s around speed, right?

So when you’re doing machine learning in a healthcare organization, it’s one of the most highly regulated industries in the world, and there are so many rules and regulations and patient protections in place that have to be followed. I think what can occur is that things can move very slowly, because people want to stay compliant and make sure they’re not running afoul of any rules and regulations, which is a great thing; that’s the last thing you’d want to do. But that can almost go too far sometimes, especially when you’re doing machine learning experiments or initial test cases.

What I’ve found is that it’s been nice to see, with my data scientists who have come from outside the industry, how fast they can move, and how they can apply that even within a very heavily regulated environment. So from that standpoint, it’s really around speed, and how you stay very efficient even when you’re given a lot of constraints and you’re having to work across the enterprise.

Curtis: Now, you mentioned something when we talked before that I wanted to touch on here and get your thoughts on.

You mentioned that when you’re trying to explain machine learning solutions, you’re trying to get input or buy-in from other leaders. There are two ends of the spectrum, right? Sometimes people think of AI as Skynet, something that’s going to come in and kill everyone. Some people think of it as just basic analytics. But really they should be thinking of it somewhere in the middle. Is there a way that you help non-technical business leaders understand what machine learning is, how it can help them, and things like this?

Ylan: Yeah, that’s where I would say I spend a lot of my time, actually. If I think back a few years to the evolution of my team, that’s something that I did not spend enough time on initially, and I really underestimated how long it would take. Because what you find, as you create relationships and maintain them with all of your different stakeholders, is that it’s very easy, because I’ve been in the field for a while, to assume that people have the same level of knowledge that you do. And they really don’t. Especially with our stakeholders who are not technical, it really starts by educating them and showing them: what is AI, what is machine learning? What can it do, and what can’t it do? Because they’re getting bombarded by vendors, by marketing: oh, AI is going to cure everything. Or even the scariest stuff, like you mentioned about Skynet: it’s going to take over humanity and humans won’t be here anymore.

So it’s the education piece. I think people use the term evangelism, right? Data science evangelism. It’s showing the value, but also being very rational about it and not being alarmist. And I think that has helped immensely, because then your stakeholders are able to ask sometimes very basic questions, but they feel comfortable asking them because you’ve created a safe space.

I think many times people are afraid to ask a question because it’s so basic that they worry people won’t think they’re smart. But it’s very important that they understand what machine learning is, how it can be used, and the fact that it’s one of many tools.

Curtis: And how do you keep up with all that? The space moves so fast, and there’s so much going on; new research is coming out almost every week. How do you, as a practitioner, keep on top of what machine learning can actually do, how it’s moving forward, and how you can take advantage of these new developments? How do you stay on top of it all?

Ylan: Really the biggest way is just reading. Reading about some of these new developments, and looking at newer research papers as well; I find that can be very helpful. I do actually try to avoid reading about artificial intelligence or machine learning in the news.

Just the regular news, because many times it’s either alarmist, which is not helpful, or it’s not interpreted correctly. It’s almost as if the reporter took a snippet and then tried to expand it, and it’s just not relevant. So that’s always a challenge, getting bombarded with that, but definitely research papers. And then, with my team in general, all of us are trying to stay as updated as we can, given how fast the field is moving. So I also look to my team: if there are new developments that could help us out, or just something that people find interesting, we do knowledge shares across the team.

Curtis: Are there certain places you look that you would recommend to people, like, yeah, this is a good source where you can find legitimate information?

Ylan: One of the best ones that I’ve found, it’s a little more detailed and in-depth, but it’s called arXiv. And there, I’d say pretty much every day there are new papers being published on that site. It’s very easily searchable as well, so if you have a specific subtopic that you’re looking for, you can probably find it on there. Outside of that, in terms of some of the more major publications, I’d say even things like Nature, the Nature publication, every once in a while will have some articles around machine learning or AI.

I’m trying to think if there are any others. National Geographic here and there. Yeah, I can’t think of any other major ones. I would say avoid any of the major newspapers, right?

Curtis: You mentioned arXiv, which is a great resource, maybe a little more technical; it’s research papers, right? Has the value you’ve found from looking at arXiv been more to expand your mind on what the technology is doing, or have you also been able to take certain papers and actually implement those models in things you’re doing? I’m just curious what value you and your team extract from there.

Ylan: I think the biggest value is testing and experimentation. Some of the papers that are written on arXiv, to actually implement those solutions, we’re probably still three to five years out, just in general with some of them. But it does help to really spur a higher level of creativity.

And I think when you see something that somebody else has done, all of a sudden, it makes it a lot less intimidating to try and implement that yourself versus being that first mover and having all that uncertainty. So it can actually create more confidence in your team, to get more creative and to really push the limits of innovation.

And I’ve found that to actually be one of the best parts of reading some of these papers and doing some of these experiments. 

Curtis: So we’re coming up on time here. I want to leave you with the last word. If there’s anything you feel we’ve missed or that’s important to share with the audience, or even how to get in contact with you or your company, I’ll let you take it.

Ylan: I would say, just building on the discussions we’ve had: artificial intelligence and machine learning are really going to be impactful in the future. They’re going to fundamentally change how we do business and how we work. They’re going to change every single industry and really become infused in all industries. So it’s a very impactful change, and it’s just going to be amazing to see what people are going to create and how it’s going to be used for good. The flip side, though, is that any technology is amoral, right? It’s really up to how people use it. Machine learning can also be used very unethically or very dangerously, and that’s something we have to keep in mind.

One of the things that has not been discussed enough in the entire industry is machine learning ethics. And I think that this needs to become more front and center, because it really provides a good framework for what we should and should not be doing with machine learning.

Curtis: Agreed. There’s some inroads there, but it is definitely not enough yet.

Thank you so much for being here. This has been a really great episode, I think, and people will appreciate hearing your expertise. You’ve done a lot of interesting things.

Ginette: A huge thanks to Ylan Kazi for being on our show. As always head to datacrunchcorp.com/podcast for our transcript and attributions.

Attributions

Music

“Loopster” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 3.0 License

http://creativecommons.org/licenses/by/3.0/
