AI and related topics are quite the soup of the moment. Here’s a glimpse at Google Trends over the past couple of years:

[Figure: Google Trends interest in "AI" over the past couple of years]

And Gartner’s most recent CMO spending and strategy surveys show nothing but green arrows up for marketing analytics. The popular job site Glassdoor reported that “data scientist” was its hottest job category in 2017 and had a median base salary of a solid $110,000, plus free smoothies.

Yap yap: Yet we know there is more talk than action in the CMO’s world. According to another survey we fielded for our Magic Quadrant for Digital Marketing Analytics (clients admire here), only 7 per cent of day-to-day use cases involved AI — most of which would have been called machine learning last year and predictive analytics the year before that.

So why worry? Because it’s real, it’s here, it’s trenchantly disruptive and we marketers need to know what it is. So what is AI? Is it a synonym for machine learning? Should I be afraid?

The answer is no. In some ways, AI is just machines doing very simple things over and over and over at a scale and speed we can’t imagine (although our brains are doing something similar now). It’s not mysterious, just mind-blowing.

Hot Dog or Not?

In Season 4, Episode 4 of HBO’s Silicon Valley, a story arc was built around an app called “Not Hotdog.” An actual app for your iPhone or Android, “Not Hotdog” used an AI image recognition algorithm to determine whether a photo you gave it likely contained a classic hot dog (or not), like so:

The app was built by Tim Anglade, who works for the show (among other things), and this most excellent person generously shared his methods in this fascinating post. For details, I recommend the source, full of mustard and drama, and what I’ll do in a moment is give you a, um, taste — a nibble — of the way a marketing data scientist might approach this kind of problem using just a laptop, some additional GPUs, and time.

But first, a little throat clearing from a recent research note called “Master Data Science Basics for Marketing” (clients enjoy here).

Deep Learning & AI

First described in the 1950s, artificial intelligence (AI) is the attempt to make computers do things humans do. It can be divided into so-called “weak” AI, which focuses on specific problems such as interpreting language (i.e., natural language processing [NLP]), and “strong” AI, which attempts to build more general systems, such as Google Brain. AI is an umbrella term that includes machine learning, as well as other fields such as symbolic reasoning.

(There’s a lot of very good background info and resources at this website maintained by a prominent investment bank in, yes, Silicon Valley.)

Deep learning: Deep learning is a subset of machine learning that uses artificial neural networks to detect very precise patterns in data. Generally requiring massive amounts of data and raw computing power, it was not a widely available technique until recently. It is useful for categorising unstructured data such as images, text and speech, and for performing complex tasks and making predictions.

[Figure: a neural network diagram showing input, hidden and output layers]

Artificial neural networks (ANNs): First described decades ago, ANNs use a large number of neurons or nodes that are arranged into layers. The neurons are weighted and reweighted during the training phase of the modeling process. In the case of deep learning, there are a number of hidden layers between the input and the output nodes.
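To make the “weighted neurons arranged in layers” idea concrete, here is a minimal sketch in plain NumPy (a toy illustration, not the actual “Not Hotdog” code) of a forward pass through a tiny network with one hidden layer. The sizes and weights are made up for the example:

```python
import numpy as np

def sigmoid(z):
    # Squash any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Toy network: 4 input nodes -> 3 hidden neurons -> 1 output neuron.
# In a real model these weights are learned (and relearned) during
# training; here they are fixed at random just to show the mechanics.
rng = np.random.default_rng(seed=0)
W_hidden = rng.normal(size=(4, 3))   # input-to-hidden weights
W_output = rng.normal(size=(3, 1))   # hidden-to-output weights

def forward(x):
    hidden = sigmoid(x @ W_hidden)     # activations of the hidden layer
    return sigmoid(hidden @ W_output)  # final score between 0 and 1

x = np.array([0.5, -1.2, 3.0, 0.1])  # one made-up input example
score = forward(x)
print(score)  # a single probability-like number between 0 and 1
```

“Deep” learning simply stacks many such hidden layers between input and output.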

There are many different types of ANNs that are used by marketing data scientists to solve particular problems. For example, so-called convolutional neural networks are particularly useful for recognizing images, e.g., the brand’s logo in photos posted on blogs or social networks.
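The core trick of a convolutional network can be shown in a few lines: slide a small weighted filter over an image and measure how strongly each patch matches it. This hand-rolled sketch (with an invented 6×6 “image” and a fixed filter; real CNNs learn their filters and use optimised libraries) detects a vertical stripe:

```python
import numpy as np

# A 6x6 "image": a bright vertical stripe on a dark background.
image = np.zeros((6, 6))
image[:, 2] = 1.0

# A 3x3 filter that responds to vertical edges
# (a bright column flanked by dark columns).
kernel = np.array([[-1.0, 2.0, -1.0],
                   [-1.0, 2.0, -1.0],
                   [-1.0, 2.0, -1.0]])

def convolve2d(img, k):
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product of the filter with one image patch
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

feature_map = convolve2d(image, kernel)
print(feature_map)  # peaks where the filter lines up with the stripe
```

A trained CNN stacks many such learned filters in layers, which is how it comes to respond to hot-dog-shaped things rather than vertical stripes.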

Back to the Hot Dogs

The directive for Silicon Valley was to create an app that would run directly on a phone, without requiring a network connection. The entire development lab setup for this fancy piece of AI wizardry (including water bottle) looked like this:

Anglade’s post walks through a process of trial and error, typical of these kinds of projects. You’ll see here a couple of themes: (1) Google and Facebook are all over AI, subsidising basic research in extraordinary ways; (2) it’s a fast-evolving field, full of wily grad students posting code on GitHub at midnight; and (3) there’s a lot of, as we said, trial and error.

Cast of Characters

Some of the players encountered in the course of the drama:

  • React Native: This is a popular development framework out of Facebook that was used to build the app itself, the UI, etc.
  • Google Cloud Platform’s Vision API: This is a very handy service that will take an image as an input and return a set of probabilities and labels that might (or might not) describe the object. This character was let go early on, however, because it wasn’t quite accurate enough re: hot dogs (or not).
  • ImageNet: A legendary repository of labeled images (about 14M at last count), ImageNet is used as raw data to train vision recognition models. These images have been the basis of a yearly competition, held since 2010, to develop the best neural recognition architecture. This contest matters more than the Oscars, in certain overeducated circles.
  • TensorFlow and Keras: These are commonly used open source libraries that can work together. (Libraries are just collections of pre-written code blocks that can be included in your programs by reference, saving you a lot of time.) TensorFlow was developed by (yes) Google, and it provides ways to train neural networks, among other things. Keras is a library that runs on top of TensorFlow and is used to train and test neural networks.

Anglade also used some open source code, offered by a Turkish grad student, that implemented a technical paper published by (zzz) Google the day before, as well as some additional techniques to “tune” the model. The program was written in Python.

As usual, a lot of time was spent tweaking the model to make it more accurate and enabling it to handle tricky situations, such as photos at weird angles or photos with hotdog-like things that weren’t hotdogs. It seems that collecting and selecting the data (i.e., the images for training) was a major project in itself.

Any data scientist will attest that building models is almost easy compared to assembling the data.

In the end, the model was trained using 150K images (the vast majority of which were not “hotdogs”) that were run through the network 240 times. This process took about 80 hours, or a long weekend.
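Each of those 240 passes over the data is what practitioners call an epoch. As a miniature stand-in for that process (logistic regression on made-up two-feature data, nothing like the real image model in scale), the weight-nudging loop looks like this:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Made-up training set: 200 examples, 2 features each.
# Label 1 ("hotdog") whenever the two features sum to something positive.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)           # weights start at zero and get "reweighted"
b = 0.0
lr = 0.5                  # learning rate: how big each nudge is

for epoch in range(240):  # one epoch = one full pass over the data
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)         # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                        # nudge the weights
    b -= lr * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == y)
print(accuracy)  # close to 1.0 on this easy, linearly separable data
```

On 150K images and a deep network, the same loop is what eats up a long weekend of GPU time.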

The Moral of the Story

Don’t fear the AI.

A neural network like the one used by “Not Hotdog” is just a collection of “neurons” and weights. It takes prelabeled images (“hotdog” and “not hotdog”) and in a very resource- and time-intensive process tries to figure out what the “hotdog” images have in common. The similarities are translated into a series of weights and filters that can be applied to new images.
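The very last step is refreshingly mundane. A sketch of it (with invented numbers, not the app’s actual weights): the network ends in a single raw score, a sigmoid turns that into a probability, and a threshold turns the probability into a label:

```python
import math

def classify(score):
    # A trained network ends in one raw number; a sigmoid turns it
    # into a probability, and a 0.5 threshold turns that into a label.
    prob = 1.0 / (1.0 + math.exp(-score))
    label = "hotdog" if prob >= 0.5 else "not hotdog"
    return label, prob

print(classify(2.2))   # a confident hotdog
print(classify(-1.5))  # probably not a hotdog
```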

What comes out? Just a label (with a probability): Hotdog or Notdog.

Bon appétit.

*This article is reprinted from the Gartner Blog Network with permission. 
