Researchers Using AI and Machine Learning to Detect Cyber-Bullying
To find less-obvious forms of abuse, Dinakar built software that compares online posts against an open-source database called ConceptNet. This is a network of words and phrases, and the relationships between them, that lets computers grasp what humans are talking about. That lets the system work out that a comment may be bullying even when it contains no abusive words.
For example, it would recognise that “Put on a wig and lipstick and be who you really are”, aimed at a boy, might be a negative comment on his sexuality, because ConceptNet knows that girls usually wear make-up while boys do not.
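The inference described above can be sketched in miniature. This is not Dinakar's actual system or the real ConceptNet API; the `CONCEPT_ASSOCIATIONS` table, the `implied_mismatch` function, and all its entries are illustrative assumptions, standing in for the kind of concept-to-stereotype relations a knowledge base like ConceptNet encodes:

```python
# Toy stand-in for ConceptNet-style knowledge: each concept maps to the
# group it is stereotypically associated with. Entries are illustrative
# assumptions, not real ConceptNet data.
CONCEPT_ASSOCIATIONS = {
    "wig": "female",
    "lipstick": "female",
    "make-up": "female",
    "dress": "female",
}

def implied_mismatch(comment: str, target_gender: str) -> bool:
    """Return True if the comment invokes concepts stereotypically
    associated with a different group than the target -- a possible
    implicit insult even when no abusive words are present."""
    words = comment.lower().replace(",", " ").split()
    for word in words:
        associated = CONCEPT_ASSOCIATIONS.get(word)
        if associated is not None and associated != target_gender:
            return True
    return False

# The article's example: aimed at a boy, the sentence contains no
# abusive words, yet invokes female-associated concepts.
print(implied_mismatch(
    "Put on a wig and lipstick and be who you really are", "male"))
```

A real system would of course need far richer relations and context (the same sentence aimed at a girl is innocuous, which is why the target's identity is a parameter here), but the core idea is this kind of mismatch check between a comment's implied concepts and its target.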
The idea is that software like this could be integrated into a social network. If it spots patterns of bullying behaviour, it could flash up a box warning the bully, block offending posts, or offer help and advice to the victim. Dinakar wants to combine his two projects to create a detector that can pick up even the subtlest of attacks, such as “liking” a negative Facebook status to make a nasty point. The research is due to appear in the journal ACM Transactions on Interactive Intelligent Systems in July.
(via AI systems could fight cyberbullying - tech - 03 July 2012 - New Scientist)