With the government poised to implement tough new measures to protect children online, new research from Ofcom reveals how regularly children encounter troubling content online—and how parents aren’t using some of the safety tools at their disposal.
Research conducted by the regulator found that more than a third (36%) of children aged 8 to 17 have seen something “worrying or nasty” on the internet in the past 12 months. That includes bullying: among the 39% of children who have experienced some form of bullying, 84% said it happened online, compared to the 61% who had experienced in-person bullying.
Reassuringly, children are likely to tell an adult when they've seen harmful content or behaviour online. Six in 10 (59%) said they would always report bad content, with younger children and girls more likely to raise concerns with parents and educators.
But overall, a fifth of parents of children aged 3 to 17 said their children had told them about something online that had scared or upset them in the past 12 months. Among those parents, 89% spoke to the child about the experience, with 59% advising the child to stop using the particular app or site and 55% advising them to block the people and content concerned. Meanwhile, a quarter responded by setting up filters or parental controls.
Ofcom found in general that parents have a high awareness of “safety promoting technical tools and controls,” although the use of these features is less prevalent.
The regulator found that nine in ten parents are aware of tools that can help them manage their children’s access to online content.
These include parental control software set up on devices such as Net Nanny, McAfee Family Protection and Open DNS FamilyShield; content filtering tools offered by broadband providers; service-specific filtering options such as Google SafeSearch, TikTok Family Safe Mode, and YouTube Restricted mode; and apps installed on a child’s phone to monitor their usage. Around seven in 10 parents reported using at least one of these types of controls.
The most commonly used parental controls were those built into device software by the manufacturer, such as the filters on Windows, Apple, and PlayStation devices. These are used by 31% of parents.
Just a quarter (27%) of parents use the content filters offered by their broadband providers. This is despite 61% of parents being aware of those features.
Among parents who are aware of these network-level filters but don’t use them, 45% said they trusted their child to be sensible online. 44% said they preferred to supervise their child’s online activity by talking to them and setting rules.
But some parents questioned the usefulness of these filters from broadband providers: 18% said ISP filters block too much content or get in the way of their use of the internet, while 11% said they don't block enough. Another 7% of parents said their child could find a way around the filters.
The UK is preparing for a sea change in the way harmful online content is filtered and policed. Under the proposed Online Safety Bill, online platforms such as social media sites and search engines will be given a duty of care toward their users and be required to take action against both illegal and legal-but-harmful content on their sites. Platforms that fail this duty will be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher. The government has said the bill will “protect children from harmful content such as pornography and limit people’s exposure to illegal content, while protecting freedom of speech.”