Right now, saying the words “artificial intelligence” at the end of a business pitch is one of the quickest ways to fool a roomful of people into handing you money. The power of the phrase is so intoxicating that in 2016, at NIPS, a top academic conference for artificial intelligence, a group of friends sent out party invites from a fictitious company called RocketAI, claiming it used something called “Temporally Recurrent Optimal Learning,” or TROL(L). They got press coverage, received more than 40 resumes from job-seekers, and wound up paying a police fine because too many attendees pressed to get in. It was all a prank.
The hunger for AI in business is so strong that Josh Joseph, chief technology officer for the data-science company Alpha Features, regularly gives a presentation entitled “A Practical Guide to Conducting an AI Snake Oil Sniff Test.” He warns audiences to be on the lookout for “fuzzy power words” like “mathematical quantification,” “sentiment of social media,” or any word along the lines of “thinks, knows, believes, understands, etc.” Basically, he told a recent audience, “if they say anything about ‘cognition,’ or in any way claim to understand human thought, that’s a big red flag.”
The reason the industry falls for this sort of talk is that everyone can sense the technology is teetering on the edge of transforming business. The phrase to understand is “machine learning”: we’re building computer programs that can take in new information and make smarter decisions with it. That can be fantastically useful. A program can look at your past music selections and predict which songs you’ll like next.
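To make that concrete, here is a minimal sketch of the idea in Python. The song names and genre tags are made up for the example, and the “learning” is deliberately crude — it just counts which genres dominate a listener’s history and ranks new songs accordingly — but it captures the pattern: feed the program data about past behavior, and it makes a smarter guess about future behavior.

```python
from collections import Counter

# Illustrative toy only: song titles and genre tags are invented.
listening_history = [
    ("Song A", "indie"), ("Song B", "indie"), ("Song C", "jazz"),
    ("Song D", "indie"), ("Song E", "jazz"), ("Song F", "pop"),
]

# "Training": count how often each genre shows up in past listens.
genre_counts = Counter(genre for _, genre in listening_history)

def score(candidate_genre):
    """Score a new song by the share of past listens in its genre."""
    return genre_counts[candidate_genre] / len(listening_history)

# "Prediction": rank unheard songs, most likely to be liked first.
new_songs = [("Song X", "indie"), ("Song Y", "jazz"), ("Song Z", "metal")]
ranked = sorted(new_songs, key=lambda song: score(song[1]), reverse=True)
print([title for title, _ in ranked])
```

A real recommender uses far richer signals than genre counts, but the shape is the same: more listening history in, better-ranked suggestions out.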
AI will drive cars, fight off hackers, and do all sorts of amazing things. But it can also be scary. Facebook recently shut down an AI experiment because two chatbots it had built started speaking to each other in a language the humans at Facebook didn’t understand. And even if AI never outsmarts us and enslaves all human life, machine learning is only as good as its data. If it observes us being racist or sexist or warlike, it’s going to keep serving up what it thinks we’ll like. It amplifies our bad behavior.
Why is AI able to do this now? The algorithms are getting better, sure, and the processors are faster — all the raw horsepower of computing. But there is one fundamental reason this is AI’s moment. Speaking at a conference in September 2017, Andrew Ng, the former head of AI for Baidu and a co-founder of Coursera, told the audience, “the reason we’re doing so much better at AI now is not because the algorithms are so much better, it’s because the digitization of life is happening so quickly that we have lots more data to take advantage of it.”
It’s because a dumb machine-learning program that gets a lot of data will almost always outperform a smart machine-learning program that gets only a little data. And something like 90% of all the data ever generated was created in the last few years, which means there’s vastly more for AI to work with. The more data an AI program receives — from our smartphones, our pictures, our emails — the better it gets at spotting our patterns and predicting what we’ll want to see and do and buy in the future. But whether we’ll be able to spot the difference between real AI and impressive-sounding, empty technical jargon? That part isn’t clear yet.
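The data-beats-cleverness point can be demonstrated with a toy experiment. The sketch below (synthetic data, purely illustrative) uses one of the simplest models imaginable — 1-nearest-neighbor, which just copies the label of the closest known example — and shows how its accuracy on a hidden rule improves as it is handed more training examples, even though the algorithm itself never changes.

```python
import random

random.seed(0)

def true_label(x):
    # The hidden rule the model is trying to learn.
    return 1 if x > 0.5 else 0

def make_data(n):
    # n random examples labeled by the hidden rule.
    xs = [random.random() for _ in range(n)]
    return [(x, true_label(x)) for x in xs]

def predict(train, x):
    # 1-nearest-neighbor: copy the label of the closest known example.
    _, nearest_label = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest_label

# Evaluate the same "dumb" model with more and more data.
test_xs = [i / 200 for i in range(201)]
accuracies = {}
for n in (5, 50, 5000):
    train = make_data(n)
    correct = sum(predict(train, x) == true_label(x) for x in test_xs)
    accuracies[n] = correct / len(test_xs)
    print(f"{n:>5} examples -> accuracy {accuracies[n]:.3f}")
```

With thousands of examples the model’s accuracy approaches 1.0, not because it got any smarter, but because its lookup table of past experience got denser — the same dynamic, writ small, behind AI’s data-driven moment.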