We talk with Ben Jones, CEO of Data Literacy, who’s on a mission to help everyone understand the language of data. He goes over some common data pitfalls, learning strategies, and unique stories about both epic failures and great successes using data in the real world.
Ginette Methot: I’m Ginette,
Curtis Seare: and I’m Curtis,
Ginette: and you are listening to Data Crunch,
Curtis: a podcast about how applied data science, machine learning, and artificial intelligence are changing the world.
Ginette: Data Crunch is produced by the Data Crunch Corporation, an analytics training and consulting company.
It’s becoming increasingly important in our world to be data literate and to understand the basics of AI and machine learning, and Brilliant.org is a great place to dig deeper into this and related topics. Their classes help you understand algorithms, machine learning concepts, computer science basics, and many other important concepts in data science and machine learning. The nice thing about Brilliant.org is that you can learn in bite-sized pieces at your own pace. Their courses have storytelling, code-writing, and interactive challenges, which makes them entertaining, challenging, and educational.
Sign up for free and start learning by going to Brilliant.org/DataCrunch. The first 200 people who go to that link will also get 20% off the annual premium subscription.
Curtis: Ben Jones is here with me on the podcast today. This has been a couple of months coming, and I'm excited to have him on the show. He's well known in the data visualization community, where he's done a lot of great work. He used to work for Tableau, and now he's off doing his own thing with a company called Data Literacy, which is interesting. We're going to dig into that. He also has a new book out called Avoiding Data Pitfalls. So all of this is really great stuff, and we're happy to have you here, Ben. Before we get going, just give yourself a brief introduction for anyone who may not know you, and we can go from there.
Ben: Yeah, great. Thanks, Curtis. You mentioned some of the highlights there. I worked for Tableau for about seven years running the Tableau Public platform, in which time I wrote a book called Communicating Data with Tableau. The fun thing for me was that it launched a teaching mini side gig at the University of Washington, which really made me fall in love with this idea of helping people get excited about working with data, having that light bulb moment where they feel like they've got what it takes. And so that's what caused me to really want to leave Tableau and launch my own company, Data Literacy, at dataliteracy.com, where I help people, as I say, learn the language of data, whether that's reading charts and graphs or exploring data and communicating it to other people, through training programs for the public as well as working one on one with clients. So it's been an exciting year doing that. Other things about me: I live here in Seattle, I love it up here, I go hiking and backpacking when I can, and I have three teenage boys, all in high school, so that keeps me busy too. And it's been a fun week for me getting this book out, seeing it start to ship, and seeing people get it.
Curtis: Let’s talk a little bit about that because the book, it sounds super interesting, right? Avoiding Data Pitfalls, and there are a lot of pitfalls that people fall into. So I’m curious what you’re seeing, why you decided to write the book, how difficult of a process it was and then some of the insights that you have in there as well.
Ben: Yeah, so I feel like the tools that are out there now are so powerful, way more so than when I was going to school in the 90s, and it's amazing what you can do with them. But it's also amazing how easy it is to mislead yourself. I started realizing that that's sometimes what I would be doing. I'd think I'd come to an amazing answer to a data question, only to find out through additional exploration and discovery that my first answer was actually wrong, based on something I didn't understand about the data or a way I was thinking about it that probably wasn't really copacetic with reality. And so that's why I wanted to write the book. I was hearing everybody talk about pie charts and how bad they were, and for a while there that seemed like all anybody was saying, and I was thinking to myself, "Well, I could get away with putting a bar chart together and no one would criticize it, because it's not a pie chart."
But then I stopped to think about all the things that could have gone wrong just in getting to the point where I was creating a chart at all. And I think that was part of the goal also. As I mentioned, I was hiking and backpacking and coming across a lot of funny warning signs out there on the trails, and I started wondering, "Well, what are the warning signs we have in data?" So that's part of the genesis of the idea of the book. But it started a long time ago. I started writing it back in 2015, so it was a bumpy road getting it done, and I'm glad to finally be there.
Curtis: Yeah. So you mentioned things that go wrong before you even make a chart, right? So moving beyond pie-chart hate, which is ubiquitous in the data community. Are you talking more about data cleansing, data pipelining? What's the focus here?
Ben: Yeah, totally: joining datasets together, cleaning dirty data when the values aren't what you think they are, even sometimes just the way we think about data. Well, here's a quick little story about that. One time I was at a luncheon in Seattle, not too far from where I worked, in the little neighborhood of Fremont. Really cool neighborhood. There's a bridge not too far from the Tableau offices there, actually a drawbridge, called the Fremont Bridge. It goes up more often than any other bridge in the entire country, because it's really old and pretty low, so every little sailboat causes it to be raised. But there's also a bike counter on the bridge, and it counts bicycles going in both directions; there's one on each side of the bridge. The Seattle Department of Transportation is trying to encourage bicycle ridership around the city.
And so they put that out there, they put the data out there, and you can go download it, which I did. At this luncheon I put it up on the screen. I was not super well prepared, so I didn't really know what I was going to see. And lo and behold, there's this huge spike in a timeline, showing almost two or three times the usual number of bikes crossing the bridge on what looked like one day. It turns out it was actually two days. No one in the room knew why, but we had a conversation about what it could be, and everybody came up with interesting theories: maybe it was a bike-to-work day, or maybe there was some race. I didn't know, and they didn't know.
So we moved on, and then maybe 20 minutes later or thereabouts, someone in the back of the room raised their hand with their phone and said that they had found out. They had researched it and discovered, through a conversation between a blogger and an employee at the Seattle Department of Transportation, that the counter on the one side of the bridge had malfunctioned; it evidently had something to do with the battery. They replaced the battery and then it went back to normal. They actually adjusted the numbers, so if you go look at that dataset today, the spike will be replaced with some average. I'm not quite sure how they figured that out, but the point that hit me in the moment was that nobody even thought to question whether the data was real or not. I think we get our hands on data, and sometimes we think it is reality.
And so that was a good healthy reminder to me that most datasets we deal with are imperfect, just like we are. There's some gap between data and reality, and if we don't think about that, we might go make a perfectly compliant, best-practices bar chart or line chart or what have you, come to some conclusions, and run off and make some decisions. And it wasn't about our chart choice; it was about the fact that there was an issue with the underlying data. Those are the kinds of things I try to talk about in the book, just to shine the light on some of those other possible issues that are always lurking and may not have anything to do with the chart choice. So that's an example, I guess, of one of the upstream problems that might occur way before you even think about what chart you want to put the data into.
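Ben's habit of questioning a suspicious spike before trusting it can be sketched in a few lines of code. This is a hypothetical example with made-up numbers, not the actual SDOT data or any method from the book: a simple robust-outlier check that flags days worth investigating before any conclusions get drawn.

```python
# A minimal sanity-check sketch (invented numbers, not the real
# Fremont Bridge data): flag days whose count sits far outside the
# typical range before trusting any conclusions drawn from them.
from statistics import median

def flag_outliers(daily_counts, threshold=3.5):
    """Return indices of days that deviate wildly from the median,
    using a robust z-score based on the median absolute deviation."""
    med = median(daily_counts)
    mad = median(abs(x - med) for x in daily_counts)
    if mad == 0:
        return []
    return [i for i, x in enumerate(daily_counts)
            if abs(0.6745 * (x - med) / mad) > threshold]

# Ten ordinary days plus one suspicious spike, like the bridge counter's
# battery glitch: the spike is flagged for investigation, not assumed real.
counts = [2100, 1950, 2200, 2050, 1980, 2150, 2010, 6400, 2080, 1990, 2120]
print(flag_outliers(counts))  # → [7], the index of the 6400-count day
```

The specific threshold isn't the point; the point is that a flagged value is a prompt to investigate (a race? a dead battery?) rather than a fact to report.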
Curtis: Right. Looking at the data integrity, and then how do you even check for that? I mean, part of it is, once you've made the chart, you kind of ask, "Does this even make sense? Is the data we're looking at within the bounds we might expect it to be?" That might be something you can do. What strategies do you have for people to think through these things and make sure what they're doing makes sense?
Ben: I think visualization actually is a great technique and a step that helps, like you said. It surfaces some of these quirky aspects of the data. It also helps us to see that maybe what we're looking at is a little bit different than what we expected, and that might be a good learning point about the data, or it might be flagging an issue with the data; maybe it's an error. So for a human being who's just exploring data, looking to discover what's there, visualization is a massive step. Also, a friend of mine, Michael Mixon, used a phrase that has always stuck with me. It was about four or five years ago that he said it, but he talks about exploring the contours of your data, and that has been such an important phrase for me because it speaks to that need up front to just get a sense of what's there.
Some people would call it profiling their data. You just want to get a sense of what's there. What are the mins and maxes? What does the shape look like for the variables? Do you see any nulls in there, and what are they and why are they there? Before you start answering questions with that data, spend even just five to 10, maybe 15, minutes walking around it, almost like you're walking around the perimeter, getting a sense of what's there. That can be a great way to surface not only possible issues and problems with the data, but maybe even a more interesting question, because before you knew what was in the data, your original question and reason for looking at it in the first place might not have been the most important or relevant thing to you, and you might amend your inquiry based on that up-front assessment. So I like to do that, and I don't want to burden people with some two- or three-hour exhaustive process. I have a little list of 10 steps to explore the contours of your data; try to get through it quickly, but get a gut check on what's there. That's one thing I try to encourage people to do to avoid many, many pitfalls.
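The "contours" pass Ben describes, mins, maxes, nulls, distinct values, can be sketched as a quick profiling function. This is an illustrative sketch, not Ben's actual 10-step list; the column names and data here are invented:

```python
# A rough sketch of "exploring the contours" of a dataset before
# analysis: per-column null counts, distinct values, mins, and maxes.
def profile(rows):
    """Summarize each column of a list-of-dicts dataset."""
    summary = {}
    for col in rows[0].keys():
        values = [r[col] for r in rows]
        non_null = [v for v in values if v is not None]
        summary[col] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
            "min": min(non_null) if non_null else None,
            "max": max(non_null) if non_null else None,
        }
    return summary

rows = [
    {"date": "2019-05-01", "count": 2100},
    {"date": "2019-05-02", "count": None},   # a null worth asking about
    {"date": "2019-05-03", "count": 6400},   # a max worth questioning
]
print(profile(rows)["count"])
# → {'nulls': 1, 'distinct': 2, 'min': 2100, 'max': 6400}
```

A few minutes reading output like this is the "walk around the perimeter": the null and the extreme max both become questions to ask before any analysis starts.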
Curtis: Got it. And are these the kinds of things you teach in your data literacy courses, like pitfalls, or do you get more into, I dunno, how do you define data literacy in your terms?
Ben: Yeah, good question. The training is definitely, I would say, littered with these different pitfalls throughout, but it isn't the focus of it. It's really just here or there in a section of the training that I'll put a little warning sign out and say, "Here's why this is a potential pitfall." The training itself is a little more than that; it's not just what not to do, it's more what to do. At present, I've got a couple of levels. For my first level, I have to credit my friend RJ Andrews, the author of Info We Trust, for encouraging me last year, before I launched, to come up with a program that just teaches people how to read and interpret charts and graphs that other people have made. There are so many people being bombarded with dashboards and charts, whether that's about their own finances and health, or in their job and their sales meetings, or even reading the news, right?
So we come across these visuals all day long. How do we make sense of them? What kinds of questions do we ask of them? What sorts of things should we watch out for that are common ways in which people misinterpret them? That's what I call my level one program. Definitely in there, there will be a couple of sections like, "Hey, watch out for the popular ones, like what if your Y axis is truncated," or those sorts of things we all know are very typical ways to get the wrong idea from a chart or graph. And then level two is the next level. I hope to get as many people there as I can, and I think level two is a tough one.
Really, it's all about working with raw data. If all we do is read charts and graphs other people have made, then I think that's probably not enough to become what I would consider highly data literate, because in a sense we're all still at the mercy of the person who created them. How did they process the data? Are they thinking clearly? Do they have some agenda? Not to be paranoid; maybe they're not trying to purposely mislead us, but maybe they got it wrong, right? So that next level, level two, is all about exploring data, finding ways to prepare it for analysis, actually doing that EDA, exploratory data analysis, and communicating your findings to other people. That's all part of the level two program right now.
So those are the offerings I've been putting out there up until now. I'm actually working on a level zero program, and I don't know what to call it. It's been so interesting to me that there's been this demand for it. I was expecting everybody to say, "Hey, teach us machine learning and AI, because those are the buzzwords." I'm finding the opposite. People are saying, "Hey Ben, can we just do something where you talk about what data is and how it applies to me?" And so I said, "Okay, so you don't want me to talk about charts and graphs and how to read them?" No; just help them understand what data is, how it got collected, show some examples, help them get comfortable with the fact that data is useful and helpful and can apply to them, and cover basic concepts like variation and uncertainty. So that's part of a, I don't know what else to call it, level zero program that's currently in development. I just didn't see that coming. It's been fascinating.
Curtis: That is interesting. Who are the people asking for that? Are these managers trying to tell their team, "Hey team, let's find insights from data," and wanting them to realize that this is actually valuable? Who is asking for this kind of thing?
Ben: Yeah, exactly. Large organizations that are essentially discovering, like I did when I was in the evangelism role at Tableau talking to people, that there are just a lot of people who feel they've been left behind by this entire data revolution. They don't feel like they have the training or education. They just have the sense of this not being their thing, of being on the outside looking in, and they're trying to figure out how to resolve that. Part of that is looking at it and saying, "Well, there are some individuals we just need to start with at that level." So it's mostly, I would say, either a training director at a big company or maybe someone in IT who is running a business intelligence team and is responsible for creating content for a large number of people.
And they're already looking out there and going, "Well, I think we just have to deal with this first question first," which is making sure people are comfortable with the basic concepts. In some ways it's really just making up for the fact that when a lot of us went to school, there were no programs like this. There was maybe even no appreciation that data was going to become this important, and so many of us really didn't get that. Even for me, with an engineering degree, I learned very little about visualization and statistics and those kinds of concepts. It was not in the curriculum back then in the nineties, and I went to UCLA, a really good school. So then what about people with liberal arts degrees? We need to make sure they feel like this is also something they can contribute to, this dialogue that's happening. I think there's just this need to start at square one.
Curtis: Yeah. So how do you tackle that? I mean, that's a very ambiguous question, but I get this as well, right? It's like, "Well, show me how data's valuable." So how do you approach that?
Ben: That's the one I'm still trying to figure out, to be honest, Curtis. I'm actually asking myself that exact question right now: how do I get someone excited about the notion of working with data, and how do I help them feel like they have the confidence to contribute? I would say a couple things. One thing I learned from Tableau Public is that it actually helps to incorporate non-corporate-y topics. People are actually already working with data in different ways; they just haven't thought of it as being, quote unquote, data. Like managing their own family finances, or planning a trip to Disneyland and figuring out how they're going to get around the park and what the wait times are. In other words, it's helping them figure out that this actually isn't something totally different from the way they've been thinking and making decisions, and maybe those ways aren't related to their employer at the current moment.
Maybe those ways are more related to their own family or their own life. Maybe it's at a community level, trying to help out with a school fundraiser, those sorts of things. I think that's a part of it. Obviously, companies are also going to want them to know they can do this with their company data as well, so it's also about blending that in. Like I said, I think Tableau Public taught me that if you bring in those other topics, they're fun, right? They make it interesting. They're not boring. They're things people can connect to and relate to. So I think that might be a way. I'll get back to you on that. But it sounds like this is something that pops up for you too. What's your take? Do you have any ways to get people engaged if they feel, I call these people data-phobic, like, "Whoa, I don't like data"? Maybe they think they're not good at math. What do you think?
Curtis: In some ways it's maybe the Google Maps concept, right? The data part of it fades into the background, and you're left with this application that just gives you what you want. But here we're talking about having people actually engage with data and try to find insights, and that's a hard one to help with, because it requires change, right? A change from what you're currently doing to something new, and that can be scary. So I think we're all still trying to figure that out. But I think you're right, and I think the idea of a quick win is the key there: show someone how it relates to them in their life and how it has some sort of benefit, do that as fast as possible, and then they might be interested to learn more.
And that's typically what we do with corporate clients, things like that. So, as you were talking about this, and this may be an off-the-wall question, but I'm curious about your thoughts: you're on a mission to help improve the data literacy of the general population, maybe of anyone you can talk to and teach. In your opinion, what would the world be like, or how would it change, if most people, or everyone, had reached, I dunno, level two of your course, and everyone's data literate? What does that look like?
Ben: Yeah. Wow. Well, that's a great question. I think some of the public discourse would change. We would be able to understand how to have conversations based in numerical facts a little better than we do today. I think today, when someone is interacting with data, say about a political poll or an election or their state budget, they're often not walking away with an accurate understanding of what is happening, and that leads to all kinds of problems in public discourse. So number one, the optimistic side of me says that this would allow us to agree more, and maybe the pessimistic side says that people would then know how to use data to advance their own agendas better than they do today. So I think there's the good and the bad, right?
There's the positive aspect of being highly data literate when it's combined with ethics, with open-mindedness, and with a desire to collaborate with people. There's also the negative side of being data literate, which involves perhaps doing things that are not exactly on the up and up. I think, though, that in a world where people are highly data literate, it would be harder to pull the wool over people's eyes, I would hope anyway, and that's one thing I would like to see. Honestly, I feel like this is more than a generation away, though. Maybe that sounds pessimistic in a sense, but I just think there are ways that we think that still have not evolved to that degree.
And it isn't like we can change this overnight, where everybody suddenly embraces and gets how to work with data at a very deep level. I kind of see it as, and this is maybe a more hopeful way to see it, our current generation being at a very young age, maybe a toddler age, in terms of the overall species' ability to work with large amounts of data. And I like to think we're in the process of building up an immune system to some of those pitfalls, which is what the book is about, and maybe also learning some good habits and techniques to help the next generations. You know how we look back and think of something like bloodletting, right?
Ben: I remember when we were kids in high school, learning about these things that people used to do in the 1700s, shaking our heads and wondering how in the world they were so dumb. And I like to think that in a few generations they'll look back and go, "Yeah, they didn't even understand correlation and causation; how could they not get that?" And maybe there's a way we can get there, and maybe what we're doing is helping lay the groundwork for that. So I hope so.
Curtis: The day when poor statistical choices are seen as akin to bloodletting.
Ben: "They didn't even know what p-values were!" Or, "They used p-values? What?" So we'll see. Yeah, I'd love to zoom ahead and see what that world looks like. But there are headwinds, though.
There's some research that shows you can show people data all day long and they're only going to believe their own preconceived notions and agendas. So there are ways we think that are problematic, and we've got to get at those as well, more than just learning how to use this or that software package. Do you know what I mean? There are other, more core human thinking problems, and beyond that, societal problems in the way we interact with each other, that need to shift before we become, I think, a highly data-literate species as a whole. And what I don't want to see is that there are haves and have-nots in data. So I'm looking at some different programs to try to figure out how to get data literacy into the hands of lots of different people.
Maybe people who wouldn't traditionally have the opportunity to learn this sort of thing, because we don't want to end up where there's only a core handful of people who have the keys to all of the data knowledge and other people get left out. That's not going to be, I think, a productive course for us to take. So then, what are we doing to find the people who are shut out? How are we going out and welcoming them into this data community, across lots of different tools and subgroups? How do we bring them in? That's a question I think a lot of people are looking at today, which is good.
Curtis: Right. Yeah. And have you found some interesting ways to do that? I mean, writing the book is one. Do you have some interesting examples?
Ben: Yeah, I think there are some amazing programs that are promoting STEM to girls and women. I think that's awesome. I had a chance to do a little training program with a company called ReacHire, R-E-A-C-H-I-R-E, like "reach" and "hire" put together, sharing one H in the middle. What they were doing is training some stay-at-home moms who were getting back into the workforce; they were going to work for a large telecom company, and they brought me in to do a little training session to bring them up to speed on what has happened with some of the tools since they were last in the workforce, maybe five or six years ago. So those sorts of things. I also still work closely with the University of Washington, and they're doing programs right now to sponsor scholarships for lower-income people they would like to help go through some of their certificate programs.
There's even a program I'm looking at right now with someone out of that same university; it's called Bridging the Gap, and it's really all about finding ways to bring data training to migrant communities. So those are some things, and I think again, it's a chip-away kind of thing. You don't just do one program and all of a sudden it's all kumbaya. We have to work at it, and then get up the next day and work at it again. We've just got to be persistent. It's going to take time, but we'll get there.
Curtis: Right. I want to touch on this point you made: when we're trying to help people do better work with data, some of it is a data issue, and some of it is just how we think. So I'm curious, in your data literacy courses, in your book, and in your work in general, do you tend to focus more on how we think and correcting that, or on how you actually work with data?
Ben: I think it's a blend of the two. I would say it's probably 80/20: 80 the working with data, 20 the thinking part of it. But I try to do the thinking stuff up front. So the very first pitfall of the eight is all about epistemology, this idea from philosophy of how we think and how we understand data. That one includes pitfalls like how we rate things, the inductive fallacy, the white swan/black swan ideas, and this idea I talked about a minute ago, the gap between our data and reality. I try to address those early, up front, before we dive in. For example, in a five-week course I'm teaching online, that first week is really all about those sorts of things. What are the questions you're asking of your data, and is that even a reasonable question to ask? A lot of times people think data tells them the answer to the question "why," but historical data very rarely answers why; it just points to patterns that you need to go investigate in other ways to get that question answered, right? So those are ways I try to address that up front, but I don't want it to be a big class on philosophy, so then we roll up our sleeves, because ultimately you learn the power of those sound ways of thinking when you actually work with data and realize how you can use your mind in either productive or counterproductive ways as you dive into a dataset.
It's hard to just talk about it in the abstract, right? It's easier, I think, to talk about the idea and the concept together. Take falsifiability, the notion that it's really important to state our hypotheses in a way that makes it possible to prove them wrong. I think that's a really powerful idea that came out of the twentieth century with Sir Karl Popper. A lot of times when we're diving into data, we're just trying to prove ourselves right; we're just trying to get data that corroborates everything we've been saying the whole time. That's human nature. So by being aware of that, we can watch out for it and maybe train ourselves to think about it differently. Maybe it's better for us to be the ones to find out whether our thinking was wrong, rather than to have that pointed out to us at a less than convenient moment. That's just flipping it upside down a little bit. I try to help people see the need to do that, and also to be aware of the ways their mind, my mind, continually puts us on the path of not doing the best kind of work when it comes to extracting insights from data and sharing them with others.
Curtis: Yeah, that's good. So we're coming up on time here, but I want to give you the last word: anything that you feel is important, or something you'd like to share with people about the book, about pitfalls? What do you think?
Ben: Well, first of all, the last word for me is thank you, Curtis, for the chance to come on here and chat about it. The one last thing I would throw out there is this one quote I put in one of the beginning chapters of the book, which is "we have to allow ourselves to be human." This is a tough process we're learning. It is a bumpy road we're on, and we're going to make lots of mistakes. I think there's a tendency in our world to try to cover up our mistakes, to make it look like we know everything and we're a person who has it all together, maybe because of social media and the highly curated version of ourselves we tend to put out there. And I'm trying to counter that and tell people, "Hey, it's okay to say I got it wrong.
In fact, it's actually better, because then other people are going to know what to watch out for, you're going to know what to watch out for, and that's just part of being human." So I'm trying to encourage people and trying to model that. There's more than one self-own I talk about in the book, I guess that's the word we use these days, something I did that was just knuckleheaded, I guess, is maybe appropriate. But it's about saying, "Hey, that's okay." Let's stop trying to make this perfect version of ourselves appear outwardly; that's not going to help us get better at working with data, that's actually going to take us in the opposite direction. So I guess that would be the last word.
Curtis: Right. Well, that's good stuff. And how can people find you or reach out to you? I'm assuming your book is on Amazon; let us know how we can get ahold of it.
Ben: Avoiding Data Pitfalls on Amazon, yep, you can find it there. Also, my website, dataliteracy.com, has all my contact info. And lastly, I feel like I'm on Twitter way too much, but you can find me there @dataremixed, so you can shoot me a note there. I'm almost always going to catch that pretty quickly.
Ginette: A huge thank you to Ben Jones for being on the podcast today. For our transcript and links to attributions, head to datacrunchcorp.com/podcast. See you next time.
“Loopster” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License