While running I listened to an On The Media interview with the author of the Wired piece on content moderation that was making the rounds this week. After a while the interview addresses the relatively obvious question of why content moderation is still a task that can only be carried out by humans:
Brooke Gladstone: I think that one of the things that struck me is that this work demands human beings clued in to American mores and laws. This has to be done by brute force of eyes and clicking fingers. Is there no alternative to human moderation?
Adrian Chen: Well, everyone I talked to said that there was no way a robot could do all of this. They can come up with programs and algorithms that will make it more effective and more streamlined, but there is always going to be somebody who has to look at it. And also the kinds of moderation that are going on are becoming more nuanced and complicated. And so I think you're always going to need people, and probably more and more people as time goes on.
What struck me when listening to this exchange is that it is just one instance of a much broader problem, namely the current inability to encode moral judgement in algorithms. Once we have ‘solved’ this issue, those poor schmucks who have to do content moderation for the rest of us will be out of a job (which sounds like a good thing), but that will also be the moment where we have to start dealing with killer drones/robots that do not require human intervention before firing their weapons.