
Norman, the 'psychopath' AI, learned from the dark corners of Reddit

Posted June 7, 2018 10:44 a.m. EDT

Norman always sees the worst in things.

That's because the data used to create Norman, an AI-powered "psychopath," made him that way.

Norman, developed by MIT Media Lab, serves as an example of how the data used to train artificial intelligence matters deeply.

MIT researchers say they trained Norman, an image-captioning algorithm, using captions from graphic images of death posted on the "darkest corners of Reddit," a popular message board platform.

The team then examined Norman's responses to inkblots used in a Rorschach psychological test and compared them with the responses of another algorithm that had received standard training. That algorithm saw flowers and wedding cakes in the inkblots. Norman saw a man being fatally shot and a man killed by a speeding driver.

"Norman only observed horrifying image captions, so it sees death in whatever image it looks at," the MIT researchers behind Norman told CNNMoney.

Named after the main character in Alfred Hitchcock's "Psycho," Norman "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms," according to MIT's website for the project.

We've seen examples of training data gone wrong before. In 2016, Microsoft launched Tay, a Twitter chat bot. At the time, a Microsoft spokeswoman said Tay was a social, cultural and technical experiment. But Twitter users provoked the bot into saying racist and otherwise inappropriate things, and it worked: as people chatted with Tay, the bot picked up language from them. Microsoft ultimately pulled the bot offline.

The MIT team believes Norman can be retrained by learning from human feedback. People can take the same inkblot test and add their responses to the pool of training data.

The researchers say they have received more than 170,000 responses to the test, most of which poured in over the past week following a BBC report on the project.

MIT has explored other projects that incorporate the dark side of data and machine learning. In 2016, some of the same Norman researchers launched "Nightmare Machine," which used deep learning to transform images of faces and places so they looked like something out of a horror film. The goal was to see if machines could learn to scare people.

MIT has also explored data as an empathy tool. In 2017, researchers created an AI tool called Deep Empathy to help people better relate to disaster victims. It visually simulated what your own hometown would look like if the same disaster struck there.