Breakthroughs in artificial intelligence over the last decade have been extraordinary. But sometimes AI gets things wrong. Really wrong.
If you didn’t know, AI is the technology that flies airplanes, drives self-driving cars, filters spam, makes recommendations for us on Amazon, Netflix and Spotify, and powers the voice-controlled assistants in our pockets.
In these situations there is no doubt AI can be extremely impressive, but it’s fair to say there is a lot of hype surrounding it, with the tech elite in Silicon Valley swearing that AI will be the savior of our world, rescuing humans from all their problems. Whenever I hear these promises I can’t help but think of the overweight humans depicted in the Pixar film WALL-E, who hover around in chairs watching TV all day while drinking soda pop.
But I digress. It’s fair to say there are many extremely helpful and impressive examples of what AI can do, but what happens when it gets things wrong? Below is my compilation of examples of AI not quite getting things right, a reminder that these systems aren’t foolproof and still require a great deal of human intervention for teaching, leading, guiding and judging their outputs.
Let’s get started…
Target Figured Out A Teen Girl Was Pregnant Before Her Father Did
Using AI, computers crawled through Target’s collected data and identified 25 products that, when analyzed together, allowed the company to assign each shopper a “pregnancy prediction” score. With this information Target could estimate a woman’s due date to within a small window and send coupons timed to very specific stages of her pregnancy (a simplified sketch of this kind of scoring appears after the excerpt). Below is an excerpt from the New York Times article “How Companies Learn Your Secrets” that broke this story:
…a man walked into a Target outside Minneapolis and demanded to see the manager. He was clutching coupons that had been sent to his daughter, and he was angry, according to an employee who participated in the conversation.
“My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”
The manager didn’t have any idea what the man was talking about. He looked at the mailer. Sure enough, it was addressed to the man’s daughter and contained advertisements for maternity clothing, nursery furniture and pictures of smiling infants. The manager apologized and then called a few days later to apologize again.
On the phone, though, the father was somewhat abashed. “I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”
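To make the mechanism concrete, here is a minimal sketch of how a purchase-based scoring model might work. The product names, weights, and threshold below are hypothetical stand-ins; Target never published its actual 25 products or its model.

```python
# A minimal sketch of a "pregnancy prediction" score.
# All products, weights, and the threshold are hypothetical illustrations;
# Target's real 25-product model has never been disclosed.

# Hypothetical weights: how strongly each purchase signals pregnancy.
SIGNAL_WEIGHTS = {
    "unscented_lotion": 0.8,
    "prenatal_vitamins": 2.5,
    "cotton_balls_bulk": 0.6,
    "zinc_supplement": 0.9,
    "large_tote_bag": 0.4,
}

def pregnancy_score(purchases: list[str]) -> float:
    """Sum the weights of any signal products in a shopper's purchase history."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in purchases)

def likely_pregnant(purchases: list[str], threshold: float = 3.0) -> bool:
    """Flag shoppers whose combined signal exceeds a (hypothetical) threshold."""
    return pregnancy_score(purchases) >= threshold

if __name__ == "__main__":
    shopper = ["prenatal_vitamins", "unscented_lotion", "cotton_balls_bulk"]
    print(pregnancy_score(shopper))   # 3.9
    print(likely_pregnant(shopper))   # True -> triggers baby-product coupons
```

The unsettling part is that no single purchase gives anything away; it’s the combination of individually innocent items that crosses the threshold.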
While researching this story I came across this article, which disputes some details of the New York Times account.
Microsoft’s Racist Chatbot
In 2016 Microsoft released Tay, an AI chatbot built to talk like a teenager and converse with Twitter users in real time. Unfortunately, Tay learned from whatever users said to it, and people took advantage of that, manipulating it into saying sexist, racist and other awful, untrue things (a toy sketch of this failure mode follows below).
Read more about Tay here.
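The underlying failure mode is easy to demonstrate. Below is a toy sketch (not Microsoft’s actual design) of a bot that learns by parroting its users: without a moderation layer, coordinated users can teach it to say anything.

```python
import random

# A toy illustration (not Microsoft's actual code) of why learning directly
# from user input is risky: whatever users feed the bot becomes its output.

class NaiveChatbot:
    """Learns by storing user phrases verbatim and replaying them later."""

    def __init__(self):
        self.learned_phrases = []

    def learn(self, phrase: str) -> None:
        # No moderation: every phrase is accepted, including abusive ones.
        self.learned_phrases.append(phrase)

    def reply(self) -> str:
        return random.choice(self.learned_phrases) if self.learned_phrases else "hi!"

class ModeratedChatbot(NaiveChatbot):
    """Same bot, but with a crude, purely illustrative content filter."""

    BLOCKLIST = {"awful", "slur"}  # stand-in for a real moderation system

    def learn(self, phrase: str) -> None:
        if not any(word in phrase.lower() for word in self.BLOCKLIST):
            super().learn(phrase)

if __name__ == "__main__":
    bot = NaiveChatbot()
    bot.learn("something awful")       # coordinated users poison the bot...
    print(bot.reply())                 # ...and it repeats what it was taught

    safe_bot = ModeratedChatbot()
    safe_bot.learn("something awful")  # filtered out before it is learned
    print(safe_bot.reply())            # falls back to "hi!"
```

Real moderation is far harder than a blocklist, which is part of why Tay was pulled offline within a day.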
Bizarre AI-Generated Recipes
Research scientist Janelle Shane let a neural network look at about 30,000 existing recipes to see what kinds of cookbook recipes it could create on its own. She quickly realized it wasn’t a great way to get a coherent recipe: the network has a memory only a few words long and no concept of what it is actually choosing. The result was recipes that were unwise and at times impossible to make, but totally hilarious (a simplified sketch of why short memory produces this kind of nonsense appears below). Be sure to read through the other recipes out loud for added entertainment.
(h/t @bethanydesign)
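To see why a memory of only a few words produces recipe-shaped nonsense, here is an illustrative sketch using a character-level Markov generator, a much simpler stand-in for the kind of character-level neural network Shane used. The tiny training corpus here is made up for the example, not her 30,000-recipe dataset.

```python
import random
from collections import defaultdict

# An illustrative character-level Markov generator (a stand-in for the
# character-level network Janelle Shane used). With only a few characters
# of context, it produces locally plausible but globally meaningless text.

CONTEXT = 4  # characters of "memory"; tiny context => recipe-shaped nonsense

def train(corpus: str, context: int = CONTEXT) -> dict:
    """Map each context-length window to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(corpus) - context):
        model[corpus[i:i + context]].append(corpus[i + context])
    return model

def generate(model: dict, seed: str, length: int = 120) -> str:
    """Extend the seed one character at a time, seeing only the last CONTEXT chars."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-CONTEXT:])
        if not choices:
            break
        out += random.choice(choices)
    return out

if __name__ == "__main__":
    # A made-up corpus fragment; Shane trained on ~30,000 real recipes.
    recipes = (
        "1 cup butter, softened. 1 cup sugar. beat butter and sugar. "
        "2 cups flour. stir flour into butter mixture. bake 20 minutes. "
        "1 cup chopped onion. saute onion in butter until tender. "
    ) * 3
    model = train(recipes)
    print(generate(model, seed="1 cup"))
```

Because the generator only ever sees the last few characters, it happily drifts from cookies into onions mid-recipe, which is exactly the kind of confident nonsense Shane documented.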