OK, this is very geeky, but I don’t know what the public implications might be, so I’ll post it here in case you see some ‘leakage’.
Google is developing a means to decide whether text entered into public forums might be ‘toxic’. Toxic text might be insulting, rude, or perhaps even racist or illegal.
Discourse allows us to check the content of the posts as they are entered, to see the toxicity level and then, IF WE WANT, take some action.
My understanding is that we can configure Discourse to tell us before we press reply if the post might be offensive.
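For the curious, here is a rough sketch of how a Perspective check works under the hood. The endpoint and JSON shape follow Google’s public Comment Analyzer documentation as I understand it; the `API_KEY` placeholder and the `0.8` threshold are my own illustrative choices, not Discourse’s actual settings.

```python
import json
from urllib import request

# Perspective's Comment Analyzer endpoint; {key} is your own API key.
ANALYZE_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
               "comments:analyze?key={key}")

def build_payload(text):
    """Build the JSON body Perspective expects: the comment text plus
    the attributes we want scored (here just TOXICITY)."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response_body):
    """Pull the 0..1 TOXICITY probability out of a Perspective response."""
    data = json.loads(response_body)
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def is_toxic(score, threshold=0.8):
    # Only flag when the model is fairly confident, to cut down on
    # the false positives mentioned below. Threshold is illustrative.
    return score >= threshold

def analyze(text, api_key):
    """POST the comment to Perspective and return its toxicity score.
    Needs a real API key and network access, so treat this as a sketch."""
    req = request.Request(
        ANALYZE_URL.format(key=api_key),
        data=json.dumps(build_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return toxicity_score(resp.read())
```

A forum plugin would call something like `analyze()` as you type, then warn you before posting if `is_toxic()` comes back true.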
As much of what we write on here is likely to be offensive anyway, my hunch is that we’re going to get lots of false positives. But it might also be useful for the odd occasion.
If it interferes too much, we’ll just turn it off.
Details on Google Perspective here.
* Image credit.