From Elsewhere – The danger of automated ‘speech police’

 

I’d like to draw your attention to an excellent and timely article written by Dr Paula Boddington, originally published in Spiked magazine, which I saw on the conservative-leaning site Front Page Magazine. In this piece Dr Boddington writes of the dangers of using technology to police the opinions of humans, and of how the people who set up machine learning systems to police ‘hate speech’ are themselves biased. The result is that they export their biases into the artificial intelligence systems they create.

I’ll put a couple of excerpts from this piece below, but I would most strongly counsel that you click through to the link and read the whole article. It is a useful and valuable resource not just on the dangers of automated speech-policing systems but also on the biases of the academics who set them up.

Dr Boddington said:

There are plenty of reasons to worry about the concept of ‘hate speech’. There are also specific concerns about the notion of Islamophobia, especially in light of controversial recent moves by the All Party Parliamentary Group (APPG) on British Muslims to produce a definition. Both concepts are subjective and hard to pin down. But it gets worse. For around the globe, a cottage industry is springing up, attempting to devise ways to automate the detection of online ‘hate speech’ in general, and of ‘Islamophobia’ in particular.

The aura of scientific objectivity that goes along with the computerised detection of ‘hate’ online is very dangerous. You can’t make a loose and fuzzy idea rigorous by getting complicated algorithms and sophisticated statistical analysis to do your dirty work for you. But you can make it look that way. And worryingly, many of those working to automate ‘hate speech’ detection have direct influence on governments and tech firms.

I believe we are already seeing this in the massive wave of bans of social media users who are not engaging in activity that many sensible people would accept as wrong, such as credibly inciting violence, but who are merely criticising Islam.

Dr Boddington went on to speak about those who create the AI behind automated ‘hate speech’ policing. In reference to a study carried out by Bertie Vidgen and Taha Yasseri of the Oxford Internet Institute, Dr Boddington said:

The researchers took samples from the Twitter accounts of four mainstream British political parties: UKIP, the Conservatives, the Liberal Democrats and Labour. It then incorporated 45 additional ‘far right’ groups, drawn from anti-fascist group Hope Not Hate’s ‘State of Hate’ reports. For academics trying to be rigorous, this is unfortunate, since it is not always clear how consistently Hope Not Hate applies the label ‘far right’. What’s more, Hope Not Hate has a regrettable habit of calling people ‘wallies’, which hardly makes their work appear rigorous or impartial.

Islamophobia is defined, in this study, as ‘any content which is produced or shared which expresses indiscriminate negativity against Islam or Muslims’. Attempting to introduce a degree of nuance, a distinction is made between ‘strong Islamophobia’ and ‘weak Islamophobia’.

The methodology Vidgen and Yasseri use is similar to that of the ADL – they had humans assess tweets, then used machine learning to train computers to continue the work. The first weak spot is, of course, the human assessors. The authors report that three unnamed ‘experts’ graded tweets from ‘strong Islamophobia’ to ‘weak Islamophobia’ to ‘no Islamophobia’. I’d be willing to bet a fiver that not one of these ‘experts’ is critical of the concept of hate speech. Broad agreement on grading between these ‘experts’ is hailed as proof of their rigour – but it may simply be proof that they share certain biases. The subsequent application of machine learning would only magnify such bias.

I tend to broadly agree with Dr Boddington here. It is a case of ‘garbage in equals garbage out’. If the researchers themselves have biases, and may also be working with flawed information or material from flawed sources, then any AI system built on that information or those sources is going to be equally flawed. Dr Boddington is correct: we should worry about where AI ‘hate speech’ policing is going, and we should also worry about what sort of information is being fed into such systems.
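To make the ‘garbage in, garbage out’ point concrete, here is a minimal, hypothetical sketch in Python of the general approach such studies describe: human annotators grade a sample of posts, a classifier is trained to reproduce those grades, and the trained model is then applied at scale. The example texts, labels and model choices below are invented for illustration and are not taken from the Vidgen and Yasseri study; the point is simply that the model can never be more objective than the annotators whose labels it learns.

```python
# Minimal, hypothetical sketch of an automated 'hate speech' classifier of the
# kind described above. The posts and labels here are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: human 'experts' grade a sample of posts. Whatever biases they share
# are baked into these labels before any machine learning happens.
texts = [
    "example post graded by the annotators",
    "another example post graded by the annotators",
    "a third example post the annotators agreed on",
    "a fourth example post the annotators agreed on",
]
labels = ["no", "weak", "strong", "no"]  # 'no' / 'weak' / 'strong' grading

# Step 2: a classifier is fitted to reproduce those human labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Step 3: the trained model is applied to new posts at scale. It can only
# imitate the handful of annotators whose judgements it was trained on.
print(model.predict(["a brand new post the annotators never saw"]))
```

Any shared bias in the original grading is not corrected by this process; it is simply reproduced, and at far greater volume, which is exactly the concern Dr Boddington raises.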

Please take the time to read the entire article by Dr Boddington, which you can find via the link below.

https://www.frontpagemag.com/fpm/272684/automated-speech-police-paula-boddington

2 Comments on "From Elsewhere – The danger of automated ‘speech police’"

  1. We have now reached the stage where you have to intentionally make typing errors to disguise what you are really writing about!

  2. Hilltop Watchman | January 31, 2019 at 8:38 pm |

It seems that for all of the examples from history, we STILL won’t learn from history and will be repeating it ad nauseam. Always some nanny who thinks THEY are cleverer and can do it better this time. Always another nanny who thinks they’re better than us proles to the extent they can tell us what to do, think, say, eat.
