
Cyberbullying is just as dangerous and widespread a problem as traditional bullying. In fact, facilitated by the spread of technology, it is becoming an even bigger one.
Last year, a 12-year-old girl from Florida took her own life by hanging. The investigation into her death found that cyberstalking and cyberbullying had driven her to it. Two other 12-year-olds were subsequently taken into custody for spreading rumors about the victim online and for urging her to kill herself.
While technology and Internet access bring many new possibilities to education, they can also become a means to harm another person, as seen in this tragic case and, unfortunately, many others. Part of the problem is that children this young, both bullies and victims, often act hastily without thinking their actions through.
That’s why we need every available solution to help prevent cyberbullying. And with advances in AI, a promising new one might be on the way.
Statistics
According to research conducted by Sameer Hinduja and Justin W. Patchin, more than a third of roughly 5,000 surveyed middle and high school students in the US have experienced cyberbullying. At the same time, almost 15 percent have taken part in cyberbullying others.
Children are often targeted because of their appearance, performance, disabilities, religion, and other factors.
The overall share of children who have experienced cyberbullying has grown by about 3 percentage points since 2016.
Statistics for the UK also show how serious the problem is: of 9,000 people aged 12 to 20 surveyed by DitchTheLabel.org in 2018, 46 percent (4,140 respondents) said they had been bullied online at least once.
The dangers of cyberbullying
Nowadays, children (and the majority of cyberbullying victims are children) have access to devices that let them communicate with each other almost anywhere, anytime. This connectivity also gives not only other children but also malicious adults an easy pathway into a child's private life.
This is what makes cyberbullying so dangerous. In the case of regular bullying, teachers (and parents) have a clearer picture. If one student abuses another physically or verbally in class, it is obvious that measures should be taken: the abuse must be stopped, both the attacker and the victim should be talked to, their parents should be informed about the incident, and so on.
When technology comes into play, however, it is nearly impossible to tell that bullying is taking place unless a participant or witness informs an adult. (And for many children, it can be very hard to bring themselves to tell anyone about what happened.)
Besides, because it is less visible, cyberbullying can sometimes be dismissed as unimportant. According to StopBullying.gov, in some states cyberbullying does not even count as behavior that schools are required to respond to.
Another danger is that the victim may not even know who is behind the bullying, since it is easy to anonymize one's online presence. While law enforcement is often able to identify the bully, the anonymity may further discourage the victim from speaking up.
Clearly, awareness of cyberbullying must be raised among people of all ages. Internet safety guides exist for children as well as for parents and teachers; Stop Cyberbullying Day is recognized internationally and endorsed by both for-profit and non-profit organizations; government programs work to help the victims.
All of these efforts are necessary, and they are fighting a very important battle. But let us look at how technology itself can help fight it.
Machine learning against cyberbullying
Since technology is the medium of online bullying, it is natural that technological solutions to the problem are being sought.
Machine learning opens up many possibilities for preventing cyberbullying. There are currently numerous initiatives to build and train algorithms that detect hateful and abusive speech online and block it before the target ever sees it.
The advantage such algorithms have over parental-control software and keyword-spotting blockers is that they can, in principle, recognize subtle and sarcastic comments, a task the older solutions cannot cope with. Machine learning is also needed because slurs and insults are often misspelled, intentionally or not.
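To make the misspelling point concrete, here is a minimal sketch of a classifier built on character n-grams rather than whole words. The training examples are invented for illustration, and a real system would need a large labelled corpus, but it shows why such a model can score "st00pid" similarly to "stupid" while a fixed keyword list would miss it entirely:

```python
# A toy sketch, not a production filter: character n-grams give the
# model partial matches on creative misspellings that a fixed keyword
# blocklist would miss. All training data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "you are stupid and nobody likes you",  # abusive
    "everyone hates you, just disappear",   # abusive
    "great job on the test today",          # benign
    "want to play games after school?",     # benign
]
train_labels = [1, 1, 0, 0]  # 1 = abusive, 0 = benign

# "char_wb" builds n-grams from characters inside word boundaries, so
# "st00pid" still shares features like "st" and "pid" with "stupid".
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

# Probability that the misspelled insult belongs to the abusive class.
print(model.predict_proba(["you are so st00pid"])[0][1])
```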
In the paper Automatic detection of bullying in social media texts, a group of researchers describes such an algorithm. In its experimental state, it performed considerably well at recognizing abusive behavior online in English and Dutch. The scientists behind the project consider the system's ability to detect signals of bullying its main achievement.
This algorithm also determines who the bully, the victim, and the bystanders are in each situation, which can help a human moderator of a website do their job faster and more efficiently.
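The paper's actual pipeline and feature set are more elaborate, but a simplified sketch of the role-labelling idea might look like this, with invented example messages and a plain bag-of-words classifier standing in for the authors' models:

```python
# A simplified sketch of role labelling: classify each message in a
# thread as coming from a bully, a victim, or a bystander so a human
# moderator can triage the conversation faster. The examples below
# are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

ROLES = ["bystander", "bully", "victim"]

messages = [
    "nobody wants you here, just log off forever",  # bully
    "please leave me alone, I didn't do anything",  # victim
    "hey, stop it, that's not okay",                # bystander
    "you're a freak and everyone knows it",         # bully
    "why are you all doing this to me?",            # victim
    "guys, this has gone way too far",              # bystander
]
labels = [1, 2, 0, 1, 2, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)

for msg in ["stop picking on her", "you deserve this, freak"]:
    print(f"{msg!r} -> {ROLES[clf.predict([msg])[0]]}")
```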
Another study draws closer attention to the limitations of keyword filtering. Based on data collected from Reddit, the researchers concluded that many topical words are used by both hate groups and support groups, making it harder for a filter to tell which is which.
Instead, the study proposes training an algorithm to recognize data produced by communities that “conform to the linguistic identity” of hate groups. That way, it can learn the patterns typical of such groups and spot them in posts on social media and other Internet resources.
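A hedged sketch of that labelling strategy is below. The posts are invented for illustration; the key point is that each label comes from the kind of community a post originated in, not from a keyword list, so a topical word that appears on both sides carries little weight on its own:

```python
# A sketch of community-level labelling: each post is labelled by the
# kind of community it came from (1 = hate group, 0 = support group),
# not by the words it contains. All posts are invented; note that the
# topical word "immigrants" appears on BOTH sides, so it carries
# little signal by itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "immigrants are ruining this country, send them all back",  # hate community
    "these people are vermin and deserve nothing",              # hate community
    "immigrants in our city need better access to legal aid",   # support community
    "how can we best welcome newcomers and their families?",    # support community
]
community_labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, community_labels)

# The same topical word, two very different contexts.
for text in ["volunteering to help immigrants settle in",
             "immigrants are vermin"]:
    print(f"{text!r} -> P(hate) = {clf.predict_proba([text])[0][1]:.2f}")
```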
Identity Guard and the Megan Meier Foundation use IBM Watson artificial intelligence to monitor children's social media activity for signs of bullying or suicidal thoughts. When the system spots any, it reports them to the parents along with resources for dealing with the situation.
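The actual Watson integration is proprietary, but the overall workflow (score each post, then alert a parent with pointers to resources when the risk is high) could be sketched like this. The function names, cue list, and threshold are all hypothetical placeholders:

```python
# A hypothetical sketch of the monitor-and-alert workflow; the real
# Watson-based products are proprietary. score_post() stands in for a
# trained risk model, and the cue list and threshold are assumptions.
from dataclasses import dataclass, field

RISK_THRESHOLD = 0.8  # assumed cutoff for notifying a parent

@dataclass
class Alert:
    child: str
    post: str
    risk: float
    resources: list = field(default_factory=lambda: ["https://www.stopbullying.gov"])

def score_post(text: str) -> float:
    """Placeholder risk score in [0, 1]; a real system would call a model."""
    distress_cues = ("hate myself", "want to disappear", "hurt myself")
    return 0.9 if any(cue in text.lower() for cue in distress_cues) else 0.1

def monitor(child: str, posts: list) -> list:
    """Return an Alert for every post whose risk crosses the threshold."""
    return [Alert(child, post, risk)
            for post in posts
            if (risk := score_post(post)) >= RISK_THRESHOLD]

for a in monitor("example_child", ["had fun at practice today",
                                   "i just want to disappear"]):
    print(f"ALERT ({a.risk:.0%}): {a.post!r} -> {a.resources}")
```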
Cyberbullying causes many problems for its victims, but the most devastating ones occur when emotionally vulnerable people are targeted. As was the case with the 12-year-old mentioned at the beginning of this story, it can sometimes lead to self-harm and even suicide.
Machine learning algorithms can already identify suicidal tendencies with high accuracy based on the “neural representation of emotions.” In the future, emotion AI could become an invaluable asset in recognizing and preventing self-harm. If a machine can understand human emotions from voice or facial recognition, it will mean a tremendous leap forward for the use of AI in suicide prevention.
Despite successful early results, anti-hate-speech AI still has a long way to go. The biggest challenge for machine learning is probably that it is genuinely hard to define what is and is not hate speech. For example, the same sentence can be hateful or harmless depending on who writes it and on their cultural and racial background.
Besides, the definition of hate speech varies from community to community and from person to person, and it can change over time as new cultural norms emerge.
While most of the solutions mentioned above are still experimental, they all show a lot of promise. It is important to remember, though, that even when they are properly deployed, cyberbullying will not simply end. Rather, something of an arms race between anti-bullying and pro-bullying technologies can be expected.
So we should use AI for assistance, but not expect it to do all the work.