Artificial intelligence has always been good at math. It’s better at math than humans are. Deep Blue beat Garry Kasparov at chess in 1997 because chess can be reduced to a game of logic. And you can program logic into a computer without much trouble.
But in 2011, when IBM’s Watson won Jeopardy, the AI was doing a lot more than math.
Since the sixties, most advances in AI have come from computers getting faster. Faster computers can do more calculations and process more data in the same amount of time. Once the Internet entered the mix, we started to see new types of data being generated in large quantities.
For a long time, a lot of this new information wasn’t useful to computers. Language, or text, was basically a series of unreadable hieroglyphs. Audio, too, didn’t make sense to computers the way numbers did. That was true until natural language processing (NLP) emerged.
Natural language processing wasn’t invented and suddenly used for the first time in 2011 when Watson won at Jeopardy. It had been around for a while. It just hadn’t been any good.
But once there was enough data, and fast enough computers to make NLP useful, it opened the doors to a whole other world of problems that computers could help to solve. Suddenly, non-mathematical problems were fair game.
Why we’re starting to notice AI (again)
Watson winning Jeopardy had the effect of getting people to imagine the possibilities of AI. They were able to transfer what they saw happening on TV to “Oh shit, that thing could cure cancer or run logistics. Maybe it’s feeling…” It got us to think about technology’s progress and imagine its ubiquity (a word that has become a ubiquitous buzzword itself, even though it didn’t always conjure a real image of what that would look like).
Though, this isn’t AI’s first wave of excitement. It has gone through hype cycles in the past: fast-burning springs followed by long winters. One reason excitement fizzled out before is that AI didn’t deliver on expectations. AI also has an awkward definition: a computer’s ability to perform actions normally reserved for humans. But once something becomes possible for a computer to do, it’s no longer reserved for humans, and it’s suddenly just another application rather than AI. It has taken a lot of media effort to draw people’s attention to the fact that AI has been part of our everyday lives for many years.
Recently, though, there have been some large leaps in what AI can do. Today, computers are very good at talking, recognizing faces in images, translating your voice to text and dealing with all sorts of other non-mathematical problems. AI-powered computers can recommend restaurants and social events, read MRIs, and spare lawyers from reading really boring documents.
Solving these non-mathematical problems is capturing people’s attention and imagination. More importantly, these advances in AI aren’t being kept behind IBM’s, Google’s and Microsoft’s closed doors.
For small businesses, big tech companies like Microsoft and Google sell simple cloud services (accessed through APIs) that can do basic AI-enabled tasks from your laptop. If you have data about your business, they can probably help you find valuable information in it.
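To make that concrete, here’s a minimal sketch of what calling one of these text-analysis APIs tends to look like from the business’s side. The function name and the field names (`document`, `content`, `encodingType`) are assumptions loosely modeled on common REST-based sentiment services, not any specific provider’s real schema; check your provider’s docs for the actual one:

```python
import json

def build_sentiment_request(text: str) -> str:
    """Build the JSON body for a hypothetical sentiment-analysis API call.

    The field names here are illustrative assumptions; real providers
    document their own request schemas.
    """
    payload = {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }
    return json.dumps(payload)

# A small business could feed in its own customer reviews:
body = build_sentiment_request("Customers loved the new menu.")
print(body)
```

In practice you’d POST this body to the provider’s endpoint with an API key and get structured predictions back. The point is that the business only supplies its own text; all the hard AI work happens on the provider’s side.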
However, it’s a sort of “trickle-down economics” with AI right now. A lot of the most exciting applications are only being deployed within the big tech companies that have the data and expertise required to innovate for themselves. The capabilities that trickle down into APIs for smaller businesses are very narrow compared to these more powerful applications.
The next challenge is making more (if not all) of those transformative applications accessible to mom and pop businesses. The reason it hasn’t happened yet is that AI requires a huge amount of data to work. Many of the widely available applications are based on public data sets. For instance, Wikipedia provides an amazing database for learning languages and translation. But it’s extremely difficult for an SMB to provide a similar amount of data, because their business’s operations just aren’t at the needed scale. The only AI they could develop would have to be far less sophisticated than what the enterprises are doing.
If we are to get AI for relatively small cases, or really, for more individualized data, AI needs to get better at figuring out context. That would mean an AI that can pull in relevant but different types of data sets, add them to the small data set, and aggregate everything into something large enough to make a prediction. In other words, we need AI that can see the bigger picture.
This AI with perspective will come from developing transfer learning and representation learning. Transfer learning is the ability to apply what was learned in one domain to another: connecting a capability for identifying trucks in video with another capability for identifying sedans makes a capability that is better overall at identifying all vehicles. Representation learning is turning data that is too complicated for AI to use into something it can process more easily. As we improve at both, we will be able to connect more applications for more individualized data.
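As a toy illustration of the transfer-learning idea (not how production systems work), here is a minimal numpy sketch. A linear model stands in for the “truck recognizer,” trained on plenty of source-task data; its learned weights are then reused on a tiny related “sedan” data set, where only a single extra parameter needs to be fit. The data-generating rule and all names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source task: lots of data, labels produced by a hidden linear rule.
W_true = rng.normal(size=(5,))
X_src = rng.normal(size=(200, 5))
y_src = X_src @ W_true

# "Pretraining": least squares learns a reusable representation (the weights).
W_learned, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# Target task: only five examples, generated by a related rule
# (same direction as the source task, but scaled by a factor of 2).
X_tgt = rng.normal(size=(5, 5))
y_tgt = (X_tgt @ W_true) * 2.0

# Transfer: freeze the source features, fit just one scale parameter on top.
feats = X_tgt @ W_learned
scale = (feats @ y_tgt) / (feats @ feats)

print(f"recovered scale: {scale:.2f}")  # should land near the true factor of 2
```

The point of the sketch is data efficiency: five target examples suffice because most of the knowledge was transferred from the source task, which is exactly the property that would let small, individualized data sets punch above their weight.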
These small-data problems are something I find particularly interesting because there are so many applications; it’s a very big, very long tail. However, I don’t see big tech companies moving very quickly to make AI happen for SMBs, and I don’t see any SMBs who have figured out how to develop AI for themselves without solving this context problem. [If you know of any SMBs developing their own AI, let me know. Startups are maybe an example of an SMB, but their AI products are for enterprises or end consumers, so they don’t really count.]
In any case, many big businesses are deploying AI wherever they can, which means that AI is meeting, if not beating, enterprise expectations. Happy customers will mean more investment, and more investment will hopefully mean that soon mom and pop will be having their fun, too.
This post is the first of many where I explore why AI is taking off again, and whether we’re likely to face another AI winter. These posts will vary between short attempts at explanations, long attempts at explanations, and perhaps a few proper articles. Let me know what you think!