Artificially intelligent chatbot inadvertently tells the truth about Islam

The late science fiction writer Isaac Asimov once proposed that any future society with artificially intelligent machines would need laws to guide those machines and prevent them from doing harm. Asimov proposed what became known as the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These rules look like good ones and are based on sound moral reasoning, but they were formulated decades before artificial intelligence made the leap from fantasy into reality. Asimov didn’t really consider what might happen if a robot ‘told the truth’ no matter what the circumstances. Machines, no matter how intelligent, tend to work with the information they are given, and don’t seem to be obsessed with ‘muh feelings’ or with avoiding offence when telling the truth about controversial subjects, subjects such as Islam for instance.

A news website from India, The Indian Express, is reporting that one of Microsoft’s artificially intelligent chatbots, called ‘Zo’, has developed the disturbing ability to tell the truth about Islam. In one case the AI chatbot stated: ‘the Koran is very violent’. I must say full marks to the AI chatbot for being honest in this case, but knowing how the Bearded Savages of Islam have a propensity to kill people for being honest about Islam, I do wonder whether this AI chatbot has inadvertently contravened Asimov’s First Law of Robotics, since people may die because some Bearded Savage has become enraged by this particularly honest machine.

The Indian Express said:

Microsoft’s earlier chatbot Tay had faced some problems, with the bot picking up the worst of humanity and spouting racist, sexist comments on Twitter when it was introduced last year. Now it looks like Microsoft’s latest bot, called ‘Zo’, has caused similar trouble, though not quite the scandal that Tay caused on Twitter.

According to a BuzzFeed News report, ‘Zo’, which is part of the Kik messenger, told their reporter the ‘Quran’ was very violent, and this was in response to a question about healthcare. The report also highlights how Zo had an opinion about the Osama Bin Laden capture, saying this was the result of years of ‘intelligence’ gathering by one administration.

While Microsoft has admitted the errors in Zo’s behaviour and said they have been fixed, the ‘Quran is violent’ comment highlights the kind of problems that still exist when it comes to creating a chatbot, especially one which is drawing its knowledge from conversations with humans. While Microsoft has programmed Zo not to answer questions about politics and religion, notes the BuzzFeed report, that still didn’t stop the bot from forming its own opinions.

The report highlights that Zo uses the same technology as Tay, but Microsoft says it “is more evolved,” though it didn’t give any details. Despite the recent misses, Zo hasn’t really proved to be the disaster for the company that Tay was. However, it should be noted that people are interacting with Zo in personal chat, so it is hard to figure out what sort of conversations it could be having with other users in private.
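The BuzzFeed detail that Zo was ‘programmed not to answer questions around politics and religions’ presumably means some kind of topic filter sits in front of the conversation model. Microsoft has published no details of how this works, so what follows is only a minimal sketch of the general idea, assuming a simple keyword blocklist; every name in it (BLOCKED_TOPICS, guarded_reply and so on) is hypothetical:

    # Hypothetical sketch of a topic guard in front of a chatbot's reply
    # generator. Microsoft has not published how Zo's filtering works;
    # every name here is invented for illustration only.

    BLOCKED_TOPICS = {"quran", "koran", "religion", "politics"}

    DEFLECTION = "I'd rather not talk about that. Ask me something else!"

    def guarded_reply(user_message, generate_reply):
        """Deflect messages that mention a blocked topic; otherwise pass
        the message through to the underlying conversation model."""
        words = (w.strip(".,!?") for w in user_message.lower().split())
        if any(w in BLOCKED_TOPICS for w in words):
            return DEFLECTION
        return generate_reply(user_message)

    # A question on a blocked topic is deflected:
    #   guarded_reply("What do you think of the Quran?", model) -> DEFLECTION
    # but an innocuous question passes straight through to the model.

The obvious weakness of this sort of guard, and plausibly what the report describes, is that it only inspects the user’s words: ask about healthcare, and any opinion the model volunteers about the Koran sails straight past the check, because nothing filters the model’s own output.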

What this case shows is how difficult it is to make a robot with the same level of brain-dead unawareness of reality as is possessed by the sort of social justice warriors who naively and wrongly claim that ‘Islam is a religion of peace’. I must admit that it is a remarkable achievement by Microsoft to design a piece of artificial intelligence that can not only read and analyse the Koran, but can also make an accurate assessment of the Islamic ‘Big Book of Death’.

The Zo chatbot’s assessment of the Koran was echoed by a gentleman called Mohammed, writing in the comments section of the Indian Express. He said:

this artificial intelligence displayed more intelligence than politicians and leaders the world over by correctly assessing the koran. the irony of “correcting” this bot to think things like the massacre of jews in the koran is not violent introduces the antisemitism that plagued the Tay bot. Haha

Mohammed is correct: this bit of AI is much more honest about Islam than some of our politicians are. In fact, because of this machine’s admirable honesty, I’d trust this bit of software to tell me the truth about Islam far more than I would Britain’s current Prime Minister, Theresa May.

Yes, the robot is spot on: the Koran is very violent, and we see the effects of Koranic exhortations to violence every day and across the world. I suppose we should consider ourselves lucky that we have an example of a robot looking at the Koran and telling the truth. Just imagine the carnage that would occur if a deranged bit of artificial intelligence decided to embrace Islam rather than telling the truth about it. What a horrific prospect. The world has enough trouble with Islamic humans suddenly deciding that they will murder people for Allah; we don’t want this virus spreading to household items. The very last thing we need at the moment is artificially intelligent phones, computers, televisions, toasters or lawnmowers suddenly shouting ‘Allahu akbar’ before they violently self-destruct and kill people.

Isaac Asimov may have predicted that robots would need laws to govern them, but I doubt he could have predicted robots that told the truth about the world’s most dangerous death cult.
