Top Four AI Podcast Episodes of August and September 2017, Plus One of Ours
With October coming to an end today and Halloween candy in abundance, we want to give you something to munch on that’s a little more nutritious—a little brain food. Here’s the recap of our four favorite AI podcast episodes (besides ours, of course) from August and September:
Six Pixels of Separation
If you don’t know who Avinash Kaushik is, it’s time to meet him. He’s Google’s Digital Marketing Evangelist, and in this episode of Six Pixels of Separation, he argues that it’s a crime against humanity if your company doesn’t have an “AI army.” That’s intense. So what does he mean? As an example, he notes that a single dermatologist might see 200,000 patients in a lifetime, and there’s no possible way that dermatologist can pass all of that expertise on to a protégé. An AI, however, can “see” 200,000 patients in 90 days, retain what it learned about them, and pass all of its dermatology knowledge on to succeeding AIs—making it the leading expert and the sum of all knowledge on the topic. And there’s so much more in this episode. Go here to listen.
RadioLab
Note: this episode contains profanity and is not family friendly.
Jad and Robert return to a fascinating moral problem they explored in a prior RadioLab episode. They weave their earlier thought experiment into a dilemma our society will soon face in real life as car companies program self-driving cars. The question revolves around this: should a self-driving car be programmed to protect its riders at all costs, even if that means several other people will die instead? And who gets to answer this fast-approaching question? Listen here.
What type of map does a self-driving car need to operate on the road? One insanely more complex than any map made for humans—a deep map. Cars need a high-powered, HD map that measures roads down to the centimeter, because no one wants to tumble over a cliff from even the slightest mismeasurement. This means robot maps must be 1:1. What does that mean? Maps have traditionally used some other ratio, like 1:63,360 (where one inch on the map represents one mile), but this map would be an exact measure of the world, where one centimeter on the map equals one centimeter of road. So what does it take to create a map like this? Lots and lots and lots of aggregated data from mapping and remapping the same areas. In this future, the mapping process would use every smart car on the road to continually remap the world. Check it out here.
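To make the scale comparison concrete, here’s a minimal sketch in Python (the function name is ours, not from the episode) showing how a scale denominator converts a distance measured on the map into real-world distance:

```python
def ground_distance(map_distance, scale_denominator):
    """Real-world distance represented by a distance measured on the map.

    A scale of 1:63,360 means 1 unit on the map covers 63,360 of the
    same unit on the ground (1 inch on the map = 1 mile, since a mile
    is 63,360 inches).
    """
    return map_distance * scale_denominator

# Traditional road map at 1:63,360 -- one inch on paper spans a full mile.
print(ground_distance(1, 63_360))  # 63360 (inches, i.e. one mile)

# A "deep map" at 1:1 -- one centimeter in the map is one centimeter of road.
print(ground_distance(1, 1))       # 1
```

At 1:1 the conversion is the identity, which is exactly why such a map demands so much data: there is no compression of the world at all.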
The AI Podcast
What if robots could learn on their own and then teach other robots how to do the same tasks? Sergey Levine, an assistant professor in the Department of Electrical Engineering and Computer Science at UC Berkeley, talks about what it takes to get robots to learn on their own, which is key to speeding up how quickly robots learn tasks. So how can engineers enable this? One approach is to teach them what the goal is without telling them exactly how to do the task. For example, instead of programming the robot to, say, pour a glass of water (an engineering feat that may not even account for every possible variation of water pouring), you can show the robot many examples of people performing the action and have it infer the end goal. Ultimately, the robot reverse engineers what it takes to successfully pour a glass of water. Listen to this podcast episode here.
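The idea of inferring a goal from demonstrations can be sketched in a toy Python example (entirely our own illustration, not code from the episode): the “robot” watches several demonstrations, takes the common end state as the inferred goal, and then works its way toward that goal rather than following hand-coded steps.

```python
def infer_goal(demonstrations):
    """Estimate the goal as the average final state across demonstrations."""
    finals = [demo[-1] for demo in demonstrations]
    return sum(finals) / len(finals)

def act_toward_goal(state, goal, step=0.4):
    """Greedy one-dimensional 'policy': move toward the inferred goal."""
    if abs(goal - state) <= step:
        return goal
    return state + step if goal > state else state - step

# Three demonstrations of "pouring": the water level rising to a full glass.
demos = [
    [0.0, 0.3, 0.7, 1.0],
    [0.0, 0.5, 0.9, 1.0],
    [0.0, 0.2, 0.6, 1.0],
]

goal = infer_goal(demos)   # every demo ends at 1.0, so the goal is 1.0
state = 0.0
while state != goal:
    state = act_toward_goal(state, goal)
print(state)               # the robot reaches the inferred goal: 1.0
```

The point of the sketch is the division of labor: the demonstrations define *what* success looks like, and the robot figures out *how* to get there on its own.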
And here we’ll mention one of our own most popular episodes. Hilary Mason, one of the biggest thinkers in the data science space, shares some very interesting insights while answering these questions:
- What are the backgrounds of your typical data scientists?
- What are key differences between software engineering and data science that most companies get wrong?
- How should you measure the effectiveness of your work or your team’s work as a data scientist for the best results?
- What is a good approach for creating a successful data product?
- How can we peek behind the curtain of black-box deep learning algorithms?
Listen in here.