What the creation of the first AI psychopath means for humankind





Named after Norman Bates, the infamous character from the film Psycho, the AI known as Norman was created to be turned evil.

MIT wanted to study how an AI can be corrupted by the type of data it is fed. Experiments that companies have run in the past show that AI systems can be very susceptible to biased data.

If you remember Tay, the Microsoft chatbot launched on Twitter in 2016, you will know that it doesn’t take an awful lot of negative information and biased data to seriously influence the output of an AI.



Tay was unleashed on Twitter for a measly 16 hours before it had to be taken down again. Microsoft had designed Tay to interact with users as a 19-year-old American woman would; instead, its output became racist, rude and downright offensive.



MIT took this to the extreme with Norman.

They exposed Norman to a subreddit, specifically one whose images and captions are dedicated to death and violence. They then conducted a series of experiments on the AI, comparing its results to those of an unbiased AI and a human.
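To get a feel for why the training data matters so much, here is a deliberately tiny, purely illustrative sketch in Python. It is not MIT’s actual method (Norman is a deep-learning image-captioning network, not a bigram table), and the captions below are invented for demonstration, but it shows the core point: identical code trained on different data produces very different output.

import random
from collections import defaultdict

# Hypothetical training captions, invented for illustration only.
neutral_captions = [
    "a person holding an umbrella in the air",
    "a group of birds sitting on a branch",
    "a vase of flowers on a table",
]
dark_captions = [
    "a man is struck by a falling object",
    "a man is pulled into a machine",
    "a person lies motionless on the ground",
]

def train_bigram_model(captions):
    """Build a bigram table: each word maps to the words that followed it."""
    model = defaultdict(list)
    for caption in captions:
        words = ["<s>"] + caption.split() + ["</s>"]
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
    return model

def generate(model, max_len=12, seed=0):
    """Sample one caption from the bigram table."""
    rng = random.Random(seed)
    word, output = "<s>", []
    for _ in range(max_len):
        word = rng.choice(model[word])
        if word == "</s>":
            break
        output.append(word)
    return " ".join(output)

neutral_model = train_bigram_model(neutral_captions)
dark_model = train_bigram_model(dark_captions)

# Identical generator and identical sampling code; only the training data differs.
print("neutral model:", generate(neutral_model))
print("biased model: ", generate(dark_model))

Swap the training lists and the output flips with them. That, in miniature, is what happened to Norman.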

Using an online version of the Rorschach test (a test in which participants are asked to interpret inkblot images), they discovered that Norman was seeing something horrific in every image it received. Some of these images and the responses they received are shown below.

“Man is electrocuted and catches to death,” Norman said.


“Man is shot dead,” Norman responded.



“Man is shot dead in front of screaming wife,” it said of a third inkblot. This is a huge contrast to the interpretation of a standard AI, which reported the following: “A person holding an umbrella in the air.”

The difference is astonishing. Luckily, Norman will not be unleashed on the world; instead, MIT has launched a website where people can enter ‘nicer’ interpretations of the inkblots to get Norman back to his old self again.

It does make you wonder, though: should we really be developing AI technologies?

Stephen Hawking didn’t think so. In an interview with the BBC in 2014 he said the following.
“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”


When one of the most intelligent people on the planet warns that creating AI is a bad idea, what do scientists do? They do it anyway.

With numerous dystopian films and novels featuring AI in a negative light, and with development of these technologies increasing at a rapid rate, it makes you wonder whether we will soon be living in a world like the one in I, Robot or Terminator.

After Tay was taken down in 2016, Microsoft reported that it was not the robot that had an issue; instead, it was the people it interacted with on the internet who had trolled and corrupted the robot’s responses.

One Twitter response said the following:
“When the machines take us out, they will do so only because they learned from us.”

Let’s hope we don’t get to that point anytime soon.


Want to read more from RLR Distribution? Check out the “You might lose your job..to a robot” post.
