Alain Briancon: Adding one more question to answer is always easy. The difficult part is, what question can I remove and still provide insight?
Ginette Methot: I’m Ginette
Curtis Seare: And I’m Curtis
Ginette: And you are listening to Data Crunch
Curtis: A podcast about how applied data science, machine learning and artificial intelligence are changing the world.
Ginette: If you’re a Fortune 1000 company, and your team needs to be trained in Tableau, statistics, data storytelling, or how to solve business problems with data, we’ll fly one of our expert trainers out to your site for a private group training. The most important investment a business can make is in its people, so head over to our site at datacrunchcorp.com and check out our training courses.
Today, our guest, Alain Briancon, will talk to us about how to work with Fortune 500 companies and help them get quick value from their data, how to build a roadmap of incremental value during the data collection and analysis process, how they help predict and incentivize customer purchases, and how to dial in on an idea for successful data science software companies.
Alain: My name is Alain Briancon. I am currently the VP of data science and chief technology officer for CEREBRI AI. CEREBRI AI is an AI company, as the name suggests. We are located in three cities: Austin, which is the corporate headquarters; Toronto, which is a hotbed of data science in North America; and Washington DC, where I work. What CEREBRI AI focuses on is developing a system to help manage both the strategic and the tactical components of customer experience. This is my fifth startup, and my third startup that involves data science and machine learning. Jean Belanger, who is the CEO of CEREBRI, is a friend of mine; now he’s my boss. So I’m trying to work through that, and it took him about 19 years to convince me to join a startup with him. And this was the right opportunity, because the kind of problems we are solving are very challenging.
It has been an absolute blast, not least working with a great team and building it up. When I joined, we were about 20 people. Now we’re about 63 people, about 50 of them on the technical side: half in data science, half in software. What has been fantastic is applying tricks and insights that I’ve gained over the years to help guide the data science side. The other thing, which is also fun, is that we have a very pragmatic view of how to approach things and how to approach engagement with customers. Our customers are Fortune 500 customers; they are major banks. One of them is a central bank. Others are carmakers, and we’re working very hard to get into the telco business as well. When you deal with such companies, first of all, there is a very interesting sales cycle, in which data science and machine learning play a role at the right moment in time.
But you also have to be humbled by the fact that you don’t start on their side from a clean sheet. I think one of the most interesting components of making things work is bringing data science and machine learning insight to companies that cannot afford, and should not be asked to afford, the “okay, let’s start from scratch, let’s share all of the data” approach. And so this jujitsu between the business case that machine learning brings up and the underlying machine learning technology is one of the most fun elements of the work.
Curtis: That’s interesting. Let’s dig into that if we can. Can you give me a concrete example of how that works at CEREBRI AI and spell out that concept for us?
Alain: Sure. We’re working with global OEMs on the automotive side, and with banks as well. One of the early questions you hear when engaging customers is “What data do you need?” And the wrong answer is “all of it.” You need to go to modeling, you need to bring your knowledge, and we deliver our insights not through models but through software. We are a SaaS company, which brings another level of challenge, difficulty, and reward all at the same time. So when we work with them, we start with, “Well, we’re going to go figure it out,” which is a little daunting at times, because the response is, “How will you know?” I say, “Well, we’ve developed tools embedded in our software that allow us to assess the quality of the data and the type of data needed in order to support both the strategic objectives that the company has and the tactical results that it seeks.” So our approach at the beginning is to explain to them that you can start developing insight, you can start developing results, with partial access to the information, and then the software will guide us and say, “If you were to give us more marketing information, we would do better.
If you were to give us more customer care information, we would do better. If you were to give us more information about the way your products are structured, it will get better.” So our approach is one of getting improvement, business improvement, as quickly as possible, and the early returns will guide us collectively to what the next piece of data we need is, and what the next piece of data is after that. I found that very refreshing, for one, because big data is always dirty data, and the larger the set you work with at the beginning, the more you will be hampered by any kind of imperfections, and there will be imperfections. So our approach is one of systematically having a roadmap for the data ingestion, the data cleaning, and the like, and it is at times surprising to the customers we engage, because others say, “Just give us access to all the data, and we’ll get it done.”
Our approach, because we are guided by the software platform, is to do it more incrementally and systematically. In the end we will likely use all the data sets that others would be using, and the data we use is corporate data, but we have found that this incremental approach to providing results to the customer is important. What I like about it is that if someone goes out on a limb in a business and says, “I want to invest in big data. I don’t want to build a huge data science team,” you can’t tell them to wait a year until they get a result. It will not sustain them inside the organization. So our approach is: let’s make sure we get results to the customer within the trimester. Getting something done within three months from an impact point of view is something that we drive as much as possible. And our view, my view, is that increasing your sales by 10% in three months, rather than waiting a year to increase your sales by 12%, is a better value proposition for the businesses.
Curtis: So this concept of land and expand, of quick wins, is a better way to approach data science than to try to do it all and not succeed.
Alain: Yeah.
Curtis: So that’s the business side, or part of the business side, obviously. Can you tell us a little bit about your approach to the data science side?
Alain: The opportunity on the data science side is dictated by those engagement principles. First of all, we deliver everything through software. Our software runs on premise or behind a corporate firewall, and we want to make sure that we can deliver on Azure, AWS, GCP, and the like. This puts some requirements on the underlying software that supports our modeling, and it takes a lot of thinking. The other thing is that we want our modeling capabilities to be data-set agnostic. That means we have to create a very systematic software orchestrator, which we call our AI orchestrator, that goes through all the steps of ingestion, all the steps of labeling, all the steps of ontology management, all the types of modeling, and all the steps of deploying the model objects into software.
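To picture what such an orchestrator might look like, here is a minimal sketch in Python of a pipeline that chains stages like the ones Alain lists. The stage names and interfaces are illustrative assumptions, not CEREBRI AI’s actual design.

```python
# A minimal sketch of a staged AI "orchestrator": every dataset flows through
# the same chain of stages. Stage names and signatures are assumptions.
from typing import Any, Callable, List

Stage = Callable[[Any], Any]

def build_orchestrator(stages: List[Stage]) -> Stage:
    """Compose a list of stages into a single pipeline callable."""
    def run(data: Any) -> Any:
        for stage in stages:
            data = stage(data)  # each stage transforms the data and passes it on
        return data
    return run

# Hypothetical stages mirroring the steps mentioned in the conversation.
def ingest(raw):      return {"records": raw}
def label(ds):        ds["labels"] = [r > 0 for r in ds["records"]]; return ds
def map_ontology(ds): ds["ontology"] = "customer_event"; return ds
def train_model(ds):  ds["model"] = f"model fit on {len(ds['records'])} records"; return ds
def deploy(ds):       return {"artifact": ds["model"], "status": "deployed"}

pipeline = build_orchestrator([ingest, label, map_ontology, train_model, deploy])
print(pipeline([3, -1, 2]))  # {'artifact': 'model fit on 3 records', 'status': 'deployed'}
```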
Alain: Others are showing pipelines that have two or three transformations. In our case, we have ten zones of transformation of the data and of the modeling. So we had to invest, ahead of the deployments, in this orchestrator that combines the best of data science and software. From a pure data science point of view, we work mostly in the structured-data world. We know how to do natural language processing, sentiment analysis, and the like, and there are lots of libraries we have access to. But because we work first and foremost with corporate data, structured data is the order of the day. So we divide any problem around four key questions: Who is committed to do something? When are they going to do something? What to sell them and what to offer them, that is, what is the object of the transaction? And how to get them to act? These four pillars are designed independently of one another; they reinforce one another. Any modeling solution we provide is a composition of those four basic pillars. “Who is committed” is a scaled measurement of propensity. “When to act” is really something quite unique to us, which is an estimation of when people are making decisions ahead of a purchase, a renewal, or a subscription. We spend an inordinate amount of time doing time, date, and calendar feature engineering, to put it mildly. To the point, for instance, that with one customer, we predicted who was going to buy a car within a one-month period, and compared to when they sent a list to their regular customers with an appeal to buy a car, we were seven times better. Not 7%, seven times better.
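The time, date, and calendar feature engineering Alain describes might look something like this minimal pandas sketch; the column names and the specific features are assumptions for illustration, not the features CEREBRI AI actually uses.

```python
# A minimal sketch of date/calendar feature engineering on customer events.
import pandas as pd

events = pd.DataFrame({
    "customer_id": [1, 1, 1, 2],
    "event_date": pd.to_datetime(
        ["2019-01-15", "2019-03-02", "2019-06-20", "2019-02-28"]),
})

# Expand each timestamp into calendar features a model can learn from.
events["day_of_week"]  = events["event_date"].dt.dayofweek  # 0 = Monday
events["month"]        = events["event_date"].dt.month
events["quarter"]      = events["event_date"].dt.quarter
events["is_month_end"] = events["event_date"].dt.is_month_end

# Days since the customer's previous event: a crude signal of where they are
# in their decision cycle ahead of a purchase, renewal, or subscription.
events = events.sort_values(["customer_id", "event_date"])
events["days_since_prev"] = (
    events.groupby("customer_id")["event_date"].diff().dt.days
)
print(events)
```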
Alain: So we decided that we would make time management a core competency, and we manage that as a separate pillar. The “what to sell,” the “what to offer,” is an autoML affinity model, where we aggregate a lot of different techniques and work on the optimization. The “how to get them to act,” which is the fourth, but I would say now the primary, model pillar, is a reinforcement learning model that allows us to look not only at the best action you need to take in order to get the customer engaged, but the one after, and the one after that, and so on. So our approach to data science is to force ourselves to answer very specific questions, to do the best job of answering those very specific questions, and then to combine the answers to provide the right insight to the customers. We force ourselves to decompose a lot, optimize every single block, and then combine the blocks.
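As a toy illustration of this decompose-then-combine idea, the sketch below composes four independently built pillar models into one recommendation. The scoring functions are stand-ins, not the actual propensity, timing, affinity, or reinforcement learning models.

```python
# Toy sketch: four pillars (who, when, what, how), each optimized on its own,
# composed into a single recommendation. All scores and names are stand-ins.
from dataclasses import dataclass

@dataclass
class Recommendation:
    customer_id: int
    offer: str
    next_action: str
    priority: float

def who_score(customer_id: int) -> float:  return 0.8   # propensity to act
def when_score(customer_id: int) -> float: return 0.6   # nearness to a decision
def what_offer(customer_id: int) -> str:   return "SUV lease"  # best-fit product
def how_action(customer_id: int) -> str:   return "service reminder email"  # next best action

def recommend(customer_id: int) -> Recommendation:
    # Each pillar answers its own specific question; the answers are combined.
    priority = who_score(customer_id) * when_score(customer_id)
    return Recommendation(customer_id, what_offer(customer_id),
                          how_action(customer_id), priority)

print(recommend(42))
```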
Curtis: And that’s a really systematic, targeted approach, which I think is needed, and which you’ve had great success with. That’s great. You’ve started three companies now in data science.
Alain: Well, I started two, but close enough.
Curtis: Okay, so you started two, and you’re part of this other one, but you’re integral to it. Can you tell me a little bit about the process of ideation? How do you come up with the problem that needs to be solved? And then how do you really determine what that is and build a machine learning or data science business around that concept?
Alain: That’s a good question. You should almost call data science “data craft,” because there is a craft component to it. First and foremost, we talk to the customers and get a sense of the problems they have. And rather than talking to one customer at a time, we talk to multiple customers at the same time, evidently not in the same room.
Curtis: Sure.
Alain: And the idea is that there are a lot of model factories out there, and there are a lot of consulting shops. That’s not who we are. We bring a software platform. So what you have to do is listen to what five or ten or twenty customers, would-be customers, are telling you, step back, and ask what’s common and what’s different. And it’s not that you’re going to build to what’s common and ignore the rest.
You just have to handle it differently. So we think in terms of objects, about classes of problems and classes of classes and so on. It’s a very systematic approach to where things could fit. For instance, if you boil the ocean, any ocean, you’ll realize that there are two types of business models: either you sell stuff or you sell subscriptions. You sell a car, or you sell a credit card; you sell a loan, or you sell a telco service. There are things you buy every now and again, and then there are things you subscribe to. That sounds maybe irrelevant, but when you look at it, you ask, “Well, can we map all our solutions into this superclass of business model?” And what this drives you to do is figure out what the common questions are, and thus what common answer frameworks you need to develop, and it creates a catalog of questions.
And the trick, the focus, is this: adding one more question to answer is always easy. The difficult part is, what question can I remove and still provide insight? So we try to reduce things quite a lot. That’s the top-down approach to the roadmap. The bottom-up approach is to create an environment where ideas percolate up. We stimulate those ideas using well-known processes. There’s a process called TRIZ, T-R-I-Z, which came from the late Soviet Union and is a way to systematically think about problems. We’ve been using that as a way to spur the thinking. So we take the top-down hierarchical view, we take the bottom-up, let everything emerge, and then we filter it through first principles that are absolute go/no-go gates for us. If you’re not able to explain why the insight is the way it is, we do not accept that technology.
That means we lean very lightly on deep neural networks, convolutional networks, and the like. They can be part of a solution, but they cannot be the solution, because you are not able to explain why the insight is the way it is. Some of our customers are banks, some of our customers are making loan decisions, and they have regulators, both internal and external. So you have to be able to explain why the decision was made the way it was. So there’s the top-down hierarchical structure, the stimulation of ideas, and then the filter through these gates. The other approach, which is maybe unique to us, is one of what I would call forced pain. A very good friend of mine, who was my boss before we became friends, had this great idea of making the product manager of a project the quality manager of that project as well.
Quality is always the one whose schedule gets shrunk at the end, because everything has moved to the right, but you have a ship date and you will meet the ship date. And so what it did was force the product manager who is writing the spec not to put in one piece of specification that was unneeded, because he or she knows they would suffer at the end. At the time I thought, what a dopey idea. Now: stroke of genius. So borrowing from Abdul’s idea, we created a group, and that group was in charge of doing the core architecture of our orchestrator. Some people would call it a pipeline; we call it an orchestrator because it is much richer. The people in charge of putting this common architecture together are the people responsible for doing the data engineering with our customers.
That means if they do it right, they don’t suffer. If they do it wrong, they do suffer. The person in charge of it, a fantastic guy called Chris, at times goes a little schizophrenic, because his role is both to onboard and to architect. But what this has forced us to do, to think about, and it’s again this kind of funneling approach of forcing things to go through as few gates, as few funnels, as possible, is to say, “This problem is only going to show up once or twice; we’ll do it by hand. This problem is going to show up 90% of the time; we need to automate it, and we need to automate it so that the artifacts that get created from our modeling process are common.” And this is paying off. I mean, the speed at which we’re able to onboard data in the same vertical market from one customer to the next: the second time, it takes about 30% of the amount of time.
The third time, it can be 10%, and that comes from forcing everything to go through the same funnel, the same framework. We, like other people, have a fair amount, or an unfair amount, of data scientists, data engineers, database architects, and the like. The approach we force ourselves to take is to push everything through few funnels, because it helps with reuse, and it really flags which problems matter to customers and which ones do not. One problem that was not obvious to us early on is what we now call data debt. If you use, let’s say, a table 0.5% of the time and the results don’t impact things, find a way to jettison that table, that source, from being used systematically, because when you go from one system to the next, if you have to drag this entire legacy of data and mixture of things along, it costs us time, but more importantly it costs the customer time and resources. So at a time when the IT departments of companies are stretched, the idea of being careful about what you use and don’t use is one that we cook into our modeling techniques, we cook into our software. Our approach is, if we can get the same answer with 10% of the data that the other ones are using, we look at that as a plus. For us, good data is better than big data.
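One way to make this notion of data debt concrete, as a sketch under assumed table names and thresholds rather than CEREBRI AI’s actual mechanism, is to instrument table reads and flag sources that fall below a usage threshold:

```python
# A minimal sketch of tracking "data debt": count how often each source table
# is actually read during modeling runs, and flag rarely used tables as
# candidates to jettison. Table names and the 0.5% threshold are illustrative.
from collections import Counter

usage = Counter()

def read_table(name: str) -> str:
    usage[name] += 1            # instrument every table access
    return f"rows from {name}"  # stand-in for a real database query

# Simulated modeling runs touching various tables.
for _ in range(200):
    read_table("transactions")
    read_table("customer_profile")
read_table("legacy_warranty_claims")  # touched once across all runs

total_reads = sum(usage.values())
for table, count in usage.items():
    share = count / total_reads
    if share < 0.005:  # used in under 0.5% of reads: candidate for removal
        print(f"data-debt candidate: {table} ({share:.2%} of reads)")
```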
Curtis: That’s a great perspective. Now, we’ve talked a little bit about structure, about structuring teams, and you’ve talked about pain points to make sure all this works. Can you tell me a little bit about how you build your team? How do you hire the right people? Because data science right now is hard to hire for, since there just aren’t enough people. So how do you approach that?
Alain: First of all, I’ll have a contrary view. I don’t think that hiring data scientists is difficult, because you have a big pool. But finding the right data scientist is a challenge. So we are pretty maniacal about our recruiting. We recruit on campus a lot. For a lot of our people, on both the software and data science teams, this is their first or second job. So we have a fair amount of folks who are coming from school, and a few, all things considered, senior folks who come from later in their careers. The team is right now 26 data scientists, 11 of them PhDs. Most of the people have graduate degrees, and we take them through a very systematic interview process: possibly a test when needed, two technical interviews, one non-technical interview, and then myself. And what we seek, besides the ability to deal with databases and coding and what have you (although we’ve got some people in data science who don’t know how to code, which is an interesting thing on its own, because we do heavy-duty math), is the ability to be curious.
So curiosity, and the willingness to experiment and fail and not freak out about the implications of failing and experimenting, is what we look for first and foremost. We look into the school pedigree to a large degree. We look at the GPA even if you’ve been out of school ten years, so the interview process is a very systematic one. I checked the statistics recently: about 260 PhDs, for instance, have applied. We’ve hired 15, and there are 11 now, so for four of them, the pace at which we work didn’t work out. And what we try to make sure of is that there is a mixture on teams of folks who have a heavy theoretical background as well as a practical background. To build on their curiosity, we do not always hire people with a math or machine learning or statistics background, although you have to have done some of it. For instance, we have someone who has done atmospheric climatology research, one who has done high-level particle physics, one who has done quantum mechanics, crystals and things of that nature. And even within the more traditional fields of mathematics, statistics, and the like, it’s pretty varied, and the idea is to bring in new ways of thinking and to feel comfortable that a totally new approach to problem solving will come about.
We have one guy on the team who was part of the team at CERN that validated the existence of the Higgs boson. Whenever he makes a presentation, my question is whether I will get lost on slide four or slide five, and I know I’ll get lost. Others are super strong relational-database-oriented folks, and it’s like, “Let me show you the simple ERD diagram,” and again I go, “Okay, which table matters? That one? No? Well, that one. Yeah, and I knew that.” So it’s a mixture of things. What’s important from a recruiting point of view, and then in the early work, is that just about everybody can have a say about who is hired. In some cases . . . I’ve got a gentleman who started two weeks ago, the man is barely getting his feet wet, and he is already interviewing to build up his team. Because we are going to work very hard, and we work very long hours for our clients, we’re very specific about who comes in. There is pretty much a veto available to anyone who is in the hiring process. So I might like a candidate a lot, and in some cases I will be the first one talking to them, so at times they get overly excited: “My God, I’ve got the CTO talking to me. I might be in good shape.” And I’ve got to tell them, “I’m the first person because I could schedule you, but you’re going to talk to some of the data scientists and senior data scientists on the team.” I don’t share the veto with them, but I kind of explain that everybody has to agree.
Curtis: Sure.
Alain: But if one of them says, “You know what, Alain, I know you like this person, but he’s just rambling too much when explaining performance stuff. I think it’s a no,” then I send a “sorry, but we’re going to pursue it with someone else” email to that individual.
Curtis: Got it.
Alain: But it is true. We’ve seen that in the DC area, and we’ve seen that in the Toronto area as well. DC, because Amazon decided to build its headquarters next to Jeff Bezos’s house. Thank you very much. The good news: my house value went up. The bad news is that now the salaries I’m competing with are higher. So there is upward pressure for sure, which makes it a little complicated, but so far we’ve been able to manage.
Ginette: Thanks again to Alain Briancon for being on the show. Again, if you’re looking to invest in your company by training your employees, head over to datacrunchcorp.com. And for our show notes and attributions, head to that same website.
Attributions
Music
“Loopster” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/