Slaughterbots. Predictive policing. Your computer suddenly showing you ads for something you were speaking about hours before. Malcolm Burnley tells us all about it.
There are lots of (read: way too many) podcasts in this world. But one that has really caught our attention of late is the new NPR podcast about artificial intelligence, AI Nation. On the show, co-host Malcolm Burnley, a former Philly Mag staff writer, breaks the controversial and ubiquitous technology down in a way anybody can understand. We tracked down the 31-year-old Society Hill resident to learn more.
How many times in the course of a day is the average American encountering artificial intelligence?
The entire experience of the Internet is so informed by AI. Autocorrect on our phones is one of the biggest examples. Our web searches. E-commerce. Things that pop up in our feeds. Since the Capitol breach, there has been a lot of conversation about how AI and certain algorithms point certain people in different political directions and down various rabbit holes. And then in the physical world, there’s a lot more facial recognition going on than we’re aware of.
Give me an example that you’ve dissected on AI Nation.
Predictive policing. One story on the show is about Nijeer Parks. He wound up in jail for 10 days based on a bad facial-recognition match. He didn’t even match the physical description of the person they were looking for. There were so many red flags that warranted further consideration. But instead, he went to jail. There is no transparency with police departments that are using this technology, and we only find out about it if they mess up.
For sure. The other frightening facial-recognition stuff is companies like Clearview. Supposedly, close to a quarter of all Americans are in their database, and what they can do is run a photo through their system and tell you who the person is and what their online profiles are. Only clients of Clearview can get access. And this can just be a photo somebody takes with an iPhone.
My sister still won’t get EZPass because she’s worried about the government tracking her. Yet she has an iPhone. The government is already onto her, right? She should just get the EZPass?
[Laughs] I think it’s fair to say that companies and the government have her information. With the government, the pandemic response in South Korea was a glaring example. They required everyone to download an app and share data with the government. As dystopian as that sounds, they were able to do contact tracing at this hyper level. They didn’t have to shut down their whole economy. They knew locations, financial transactions. They had closed-circuit footage. I talked to people in South Korea who said that when they were told to quarantine, if their cell phone didn’t move for a few hours, someone would knock on their door to see if they had just left their phone at home so they could go out.
Okay, so my wife and I were speaking about a type of product while we were sitting on the couch. Nobody was searching for anything online. The next day, I’m getting ads for the thing. No, we don’t have Alexa. Please explain. Is this just a coincidence?
It’s definitely not a coincidence. It’s probably your phone. Siri. It’s happened to me, and it’s jarring and shocking. You probably gave some app on your phone access to your microphone. Who knows?
Right. Could be the phone. Could be the laptop. Maybe the cable remote that you can talk to. It’s scary. Is there any regulatory body dealing with all of this? Some agency whose mission is to prevent us from suffering the fate of the astronauts in 2001 or humankind in Dr. Strangelove?
No, but I believe there will be soon. There have already been hearings in Washington with U.S. Senators and Representatives asking deeply skeptical questions. There have been data breaches. Then things like the Capitol breach. I think some big thing will happen that will trigger a call for more regulation, the way that 9/11 triggered the creation of Homeland Security.
Was there some point when you were researching for AI Nation where you were just like, OMG, I can’t believe they’re doing that? Like some drone that can scan my face and then shoot me with a laser?
There’s a video online called Slaughterbots that is essentially what you just described. It’s a fictional video put out there in 2017 by this group trying to regulate autonomous weapons. The capability exists now to have a tiny drone identify somebody by facial recognition and then kill them. There’s no evidence that it’s been used yet. The world of autonomous weapons is so, so scary to dive into.
But with drones, you actually have a human behind the trigger, right?
Most examples are some mixture of AI and humans having to decide to pull the trigger. But one guest we had on AI Nation was Laura Nolan, who quit Google. She was an engineer and quit over this controversial project with the DoD called Project Maven, a kill-chain project that would get computers a lot closer to pulling the trigger themselves.
Just to be clear, though, there aren’t any drones doing this now. But there will be in the future.
There’s a Turkish military company that makes something called the Kargu. It’s much bigger than the slaughterbot you saw in that video. It can go out on its own and ID someone and kill them. The technology has been confirmed. We don’t know that it’s ever been used.
But it’s not all bad, right? Artificial intelligence can help me, too.
Absolutely. AI will really help us so much in a future pandemic. Years from now, we’ll be able to end a pandemic in a matter of weeks using AI.
“AI is already causing and will inevitably cause a lot more inequity. The rich get richer. The poor get poorer.”
Anything more practical, for right now?
The most impressive example of AI that I’ve found is regarding what’s known as the protein folding problem. It was something that scientists were trying to solve for 75 years. Some scientists spent their entire careers trying to solve this problem. Well, AI has solved the problem. It was announced last year. How AI solved it was just incredible. The scientists had been using the laws of physics. The computer invented its own form of physics on the fly. This has major impacts on fighting a future pandemic and finding cures for diseases.
So a computer figured out a problem that countless scientists had worked on for the better part of a century, which raises an important question: What do we do with all of the people whose jobs have been taken by AI and whose jobs will be taken in the future by AI?
I’m so happy you brought this up. AI is already causing and will inevitably cause a lot more inequity. The jobs AI tends to take are low-wage jobs. AI will tend to replace women and people of color. The rich get richer. The poor get poorer. Just like in the Industrial Revolution, when inequity skyrocketed. I don’t have the answer. And it’s scary to think of the economic and mental-health impact on people who lose their jobs or can’t find work. We need to make sure we design systems with those people in mind.
Okay, last thing. Some predictions. Drone deliveries en masse. A robotic car driving me to work. How far off is all this stuff in reality?
AI is really good at looking at data, finding patterns, and predicting things, and less good at moving through the physical world. I’ve talked with autonomous-car experts, and they said the marketing around those vehicles has greatly outpaced the technology. Most people think we’re still at least a decade away from seeing self-driving cars regularly. And you’re going to see drones repairing bridges and doing other tasks away from people more than you’ll see them coming through residential neighborhoods. What we’re going to see a lot more of are things like smart earrings and smart necklaces and other devices tracking our biometric data and sharing it with a lot more people. Your doctor. The government. We’re going to see our data increasingly getting sucked up. AI can help improve things, and to do that, you have to let it run loose. But what we don’t know yet is how much it will mess things up.