SafeToNet has screened more than 65m texts sent since November and found that girls aged 10, rather than the teenage boys it had expected, use the most explicit and potentially harmful sexual language. The SafeToNet app looks for language indicating sexual talk, abuse, aggression and thoughts of suicide and self-harm.
It assigns a threat level to each flagged message, and girls of that age were the most prominent in category 3 of sexual references, which covers the most explicit and harmful language. In December, it emerged that more than 6,000 children under 14 had been investigated by police for sexting offences in the past three years, including some of primary school age.
SafeToNet believes this reflects a rite of passage rather than actual sexual activity. It also found that while girls in general use more sexually explicit language than boys, boys are more abusive and aggressive, and that children fear bullying most on a Sunday evening.
The analysis provides a window into the often hidden online lives of children aged eight and over. Half of 10-year-olds have a smartphone, and ownership doubles between the ages of nine and 10, according to the regulator, Ofcom. Almost half of parents of children aged five to 15 are concerned about their child seeing content that could encourage them to harm themselves, the regulator found. The app provides parents with a report on the level of risky language their children are using but does not reveal what they wrote.
As worrying as the findings may be for parents, there was a glimmer of hope: when children spend more time with their families and screen time drops, so does some risky behaviour. SafeToNet employs a team of linguists and psychologists specialising in online behaviour to programme the algorithm that screens texts. It uses artificial intelligence to contextualise what users are typing, so it flags phrases only when they are being used in a way that indicates potentially harmful behaviour.
The app works by overlaying its own keyboard on whatever social media apps children are using in order to monitor what they are writing. Using an algorithm, it feeds back to the user in real time, via colour coding, if what they are typing is considered risky. The system also notices patterns that could indicate risk: rapid exchanges of short texts, for example, can indicate bullying or sexual dialogue. The worst language may trigger a red light and block the message's dispatch.
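SafeToNet's real screening is built by linguists and an AI model that weighs context, but the traffic-light feedback loop described above can be sketched in miniature with a toy keyword list. Everything here — the terms, the scores and the function names — is a hypothetical illustration, not SafeToNet's actual system:

```python
# Toy sketch of colour-coded, real-time typing feedback.
# RISK_TERMS and its scores are invented for illustration only.
RISK_TERMS = {"hate you": 2, "kill": 3}

def assess(draft: str) -> str:
    """Return a traffic-light colour for the text being typed."""
    score = max(
        (level for term, level in RISK_TERMS.items() if term in draft.lower()),
        default=0,
    )
    if score >= 3:
        return "red"    # highest threat level
    if score >= 1:
        return "amber"  # flagged, but the user only gets a warning
    return "green"

def try_send(draft: str) -> bool:
    """Block dispatch when the draft is rated red, as the app may do."""
    return assess(draft) != "red"
```

In the real product the "keyboard" sits on top of whichever messaging app is in use, so the check runs on every keystroke rather than once at send time.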
Parents do not get to find out what their children are writing, but are instead provided with a risk score. SafeToNet says the app focuses on sexual language and patterns of behaviour such as sexting; abuse and aggression, notably bullying; and issues of low self-esteem, notably dark thoughts, anxiety and stress.
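The parent-facing report described above — an aggregate risk score with no access to the messages themselves — could be sketched as follows. The category names, threat levels and function signature are illustrative assumptions, not SafeToNet's API:

```python
# Hypothetical sketch: parents see only per-category counts and the
# highest threat level recorded; message text is never passed in.
from collections import defaultdict

def parent_report(flags):
    """flags: list of (category, threat_level) pairs, one per flagged
    message. Returns an aggregate summary per category."""
    report = defaultdict(lambda: {"count": 0, "max_level": 0})
    for category, level in flags:
        entry = report[category]
        entry["count"] += 1
        entry["max_level"] = max(entry["max_level"], level)
    return dict(report)
```

Keeping only (category, level) pairs is what makes the privacy promise enforceable: the report structure simply has nowhere to store what a child wrote.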