Robots...are awesome. They are.
Buuuuut, if there's one thing we can take away from every single movie ever made about robots, it's that we're totally f*cked if they decide to go rogue. The Terminator, War of the Worlds, I, Robot, Blade Runner, 2001: A Space Odyssey—literally every movie features humans getting their asses kicked. (I'm not like, expecting any of that to actually happen in real life, but I'm not not expecting it, you know? You know.)
It all got me thinking. Back in high school, there was a guy named Kevin Harrington who built bikes and would occasionally go by the moniker of Big Poppa Choppa, which is still hilarious. I was pleasantly surprised when I found out Kevin graduated from Worcester Polytechnic Institute with a Master's in Robotics Engineering and currently builds robots that perform invasive surgery and industrial tasks. Well, then.
I caught up with Kevin, seeing as how he's become an encyclopedia of knowledge about robots, and asked him about the feasibility of some of the most frightening robot movies. I received some comically disturbing answers.
Read on. I, and robots everywhere, dare you.
Okay, to begin, is The Terminator a realistic look at the future of robotics in America?
It is, actually. The Terminator is not only possible, it’s probable. It is in the works—it is an active project of the U.S. Military right now to have Terminator-style robots with artificial intelligence that may not necessarily obey commands. It is something they are absolutely working on—and almost done.
If you want a truly horrifying glimpse at the capabilities of a robot, take a look at Boston Dynamics and this robot they developed called Petman. They’ve delivered a bunch of these humanoid robots for DARPA [Defense Advanced Research Projects Agency].
The drones we hear about are the ones that aren't classified. The ones in process right now are about as capable as the robot warships you see in the film—the flying ones. The U.S. Military has deployed boats, aircraft, robot trucks, and robot helicopters that run supply missions. They have the human frame to hold serious weapons, and these are the frames that are currently being developed.
We’re actually on track with a timeframe very similar to The Terminator’s—I believe it all happens around 2017.
Essentially you’re saying a robot holocaust is possible.
This is actually an older idea than robots; it’s called Roko’s basilisk and it's a thought experiment that says: 'Let’s imagine there’s a super artificial intelligence...something that is capable of thinking and acting and behaving all on its own and is way, way, way, beyond the average capabilities of people.'
We see the doubling of computers—just doubling and doubling and doubling. Here’s the thing: what if they reach human intelligence? They keep doubling every 18 months, but the interesting quirk is—once they’re at the level of human intelligence—they can begin editing themselves faster than that 18-month cycle. Then: boom. Off-the-charts intelligence. They’ll begin developing so fast that we won’t even be able to understand their intelligence.
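The doubling argument above can be sketched as a back-of-the-envelope calculation. Every number here is an illustrative assumption (the starting capability, the "human level" threshold, the 50% cycle shrink), not anything measured—it just shows why the runaway phase takes a bounded, short amount of time once self-improvement kicks in:

```python
# Back-of-the-envelope sketch of the "doubling" argument.
# All numbers are illustrative assumptions, not measurements.

def months_to_superintelligence(start=1.0, human_level=1024.0,
                                cycle=18.0, speedup=0.5):
    """Capability doubles every `cycle` months until it reaches
    human level; after that, each doubling also shrinks the cycle
    by `speedup` (the machine improves its own rate of improvement)."""
    capability, months = start, 0.0
    # Phase 1: ordinary hardware doubling, Moore's-law style.
    while capability < human_level:
        capability *= 2
        months += cycle
    # Phase 2: recursive self-improvement. The cycles form a
    # geometric series, so 20 more doublings (a millionfold jump)
    # add barely one more old-style cycle of elapsed time.
    for _ in range(20):
        cycle *= speedup
        capability *= 2
        months += cycle
    return months, capability

total, cap = months_to_superintelligence()
print(f"{total:.0f} months, capability x{cap:.0f}")
```

With these made-up defaults, reaching human level takes 180 months, and the next twenty doublings together take only about 18 more—which is the whole point of the "boom."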
Roko’s basilisk also says: 'Okay, now imagine one of those is evil.' If it’s evil, all-powerful, and bent on world domination...you’d have mere days between not possible and taking over. The first one that comes out, it's a coin-flip between evil and not evil. It is either bent on our destruction or not, and it’s a game of probability.
Here's where it gets awful: imagine this thing takes over and has the ability to probe into our minds and ask: hey, did you help me come into existence or did you fight against me coming into existence? If you were against it, it'll bear down on you with an unimaginable amount of torture and suffering. On the other hand, if you helped this robot come into existence, then you're free to go.
What causes this evil switch?
It literally is just chance, but it also depends on the utility function. What do you order [this robot] to achieve and how does it extrapolate what to do? If you remember the Terminators in that movie...their utility function was to prevent climate destruction. That was the thing they were set to do.
The Terminators carried out their utility function by destroying all humans. You don’t necessarily know what the logic there is, because it’s a functional brain that operates with more input and more processing than any human brain ever could. So we can’t really even begin to fathom what thinking that way would be like.
So, we’re f*cked?
Once you start talking about A.I., it’s pretty much out of our hands. There’s no way of knowing which way it’ll go, but it’ll go. I think the more likely scenario is any sort of super intelligent robot that’s at that level is going to get the f*ck right off this planet.
Because, why would you stay on a planet that’s full of corrosive oxygen and microbes and dominate it? Everything they need is out in the asteroid belt and the rest of the universe. Our number one saving grace is our insignificance. We have this very over-inflated sense of “us.” I mean, our planet is important to us, but it wouldn’t be important to something that doesn’t need the biosphere to exist.
This reminds me a lot of War of the Worlds, where it’s the Earth's atmosphere that ultimately defeats the robots.
It’s the microbes, exactly. For a robotics system, it would be much more likely that they’d [go] closer to the sun or out to the asteroid belt, or off to another sun or a bigger sun. There’s an infinity of an infinity in every single direction. Humans don’t really think about what’s above or below our planet.
What’s the best-case scenario movie for robots?
Well, the other side of things is like Iain Banks's Culture series. There’s a super intelligence, but they’re generally benevolent. The team I work with—Technocopia—is interested in something that more resembles the Star Trek replicator. On one side you have robots that do aquaponics farming. In the middle, you have automation that takes the products from the farm for carbon-based electronics and plastics, and then automated manufacturing lines. The end goal is all the necessary materials to support a human community. So, luckily, I think this more utopian scenario is far more likely than what you see in The Terminator.
That could be amazing.
Yeah, as long as we don’t f*ck with the robots. Roko’s basilisk only happens if we make it happen. It can go bad, but I don’t think that’s going to happen—I think we’re going to have more of a Star Trek society 10 or 15 years from now.
Well let’s hope we don’t mess it up.
Exactly, and that’s the allegory of The Terminator—it’s not the Terminators that are bad, it’s the people who designed them. If we let technology exist for technology’s sake, we just have technology without humanity and that’s the lesson we should learn. Also—if robots really want to exterminate humanity, just raise the temperature of the Earth.
Instead of guns?
The least realistic part about The Terminator is how they do it—why don’t you just put a bunch of methane in the atmosphere or put a giant lens in front of the sun? We’re not that hard to kill.
Well, Kevin, you've scared us all to death. Thanks.
It was my pleasure.