Social media safety measures and why Instagram’s new nudge feature isn’t enough

Sep 20, 2022 | 4 min read

Exploring recent social media safety measures and why they fall short of making real, long-lasting change to online safety.

In June 2022, Instagram introduced a new feature designed to ‘nudge’ teens away from harmful content. The feature notifies users when they have spent too long on a particular topic and suggests other topics, so that they can be more mindful online and move on to new, non-harmful content.

This is just one in a string of social media safety measures introduced in recent years to make the online world a safer place. However, with online grooming crimes rising by more than 80% in four years, 47% of teens saying they have seen content online they wish they hadn’t, and one in five children aged 10 to 15 experiencing cyberbullying, both parents and online users are calling for more to be done.

 

Recent social media safety measures fall short

Social media sites have made various attempts to make their platforms safer but, as the following examples demonstrate, many of the improvements are only surface deep and do not tackle the real problem.

Earlier in the year, TikTok announced a ‘bedtime block’ for teens aged between 13 and 17. This feature stops push notifications during bedtime hours, with the aim of encouraging a healthier bedtime routine. However, the feature is rendered useless if a child has created their profile with an adult’s date of birth.

YouTube has turned off autoplay for teen users to stop them spending harmful amounts of time chain-watching videos. However, if the account is not supervised by an adult, autoplay can easily be turned back on.

Snapchat announced new safety features aimed at making it harder for adults to add children they’re not directly connected with. Children aged 13-17 will not appear in the ‘quick add’ section. However, it is unclear whether the feature would be overridden if a child has a lot of mutual friends with an adult.


 

But more needs to be done!

These social media safety measures are a step in the right direction, but they are also a sure sign that more needs to be done.

The Government wants the UK to be “the safest place online”, but it is undeniable that there is a long road ahead.

As Andy Burrows, Head of Child Safety Online Policy at the NSPCC, explains: “There is no commercial drive or legal imperative to incentivise companies to make their platforms safer other than a sense of doing the right thing, and time and time again, that simply hasn’t been enough. Regulation is needed because the tech companies won’t do this themselves…and the regulatory powers have to give Ofcom the teeth that it needs to be able to take on this issue.”

A very recent example of this is the ex-kickboxer and ‘influencer’ Andrew Tate, who used his social media accounts to produce violent, misogynistic and homophobic content for years until social media sites removed him at the beginning of August this year.

In July, he was Googled more than Donald Trump, and his videos racked up more than 11.6 billion views before being taken down. The harmful content he shared has no doubt left an imprint on the many young minds exposed to it.

Why were social media sites so slow to act when this content clearly violated their policies on ‘dangerous individuals’?

Well, the more times his content was viewed, shared, and liked, the more money social media sites made. Without a commercial drive or legal imperative, Instagram and TikTok had no real obligation to remove his accounts other than ‘doing the right thing’ for their users.

How much longer would it have taken for Tate to be banned from social media sites had it not been for the public’s online campaign to de-platform him? How many other misogynistic, violent and harmful accounts would continue to circulate and fall into the feeds of impressionable young users?

 

How can social media safety measures be improved?

Is it time for regulators to impose financial and legal sanctions on social media sites that fail to protect their users? Or is it time for platforms to do the right thing and aggressively self-regulate?

If the latter, the current algorithm for prioritising trending content would likely need to change. Users’ social media feeds would go back to reverse chronological order, so that every post gets the same length of exposure rather than high-performing posts sitting at the top of the feed. This in turn should reduce the traction gained by harmful pieces of content.
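As a rough sketch of what that change means in practice, the snippet below contrasts the two ordering approaches in Python. The `Post` structure and its `engagement_score` field are illustrative assumptions, not any platform’s real data model or ranking system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str
    timestamp: float         # when the post was published (seconds since epoch)
    engagement_score: float  # hypothetical stand-in for likes, shares and comments

def engagement_ranked_feed(posts: List[Post]) -> List[Post]:
    # Trending-style ranking: high-performing posts float to the top,
    # so viral content (harmful or not) keeps earning extra exposure.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def reverse_chronological_feed(posts: List[Post]) -> List[Post]:
    # Newest-first ordering: every post gets the same window of exposure,
    # regardless of how much engagement it attracts.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)
```

In the reverse-chronological version, a post’s position depends only on when it was published, so harmful content cannot buy itself extra reach by provoking reactions.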

Social media sites must take responsibility for the content available on their channels by improving their methods of identifying and removing harmful accounts. The quicker this content is removed, the fewer users are exposed to it.

Also up for question is whether social media providers should require users to submit ID before they can open an account. This would stop users hiding behind a wall of anonymity where they are free to say whatever they please without consequence.

However, requiring ID at account creation would likely further disempower marginalised communities, such as refugees and the homeless, who often lack ID and rely on these sites to build meaningful connections. It would also likely result in fewer users, due to privacy concerns and a reluctance to share personal data. There is also the argument that online anonymity is essential in countries where individuals live under oppressive governments.

 

Improving social media safety measures needs to be a priority

Regardless of whether it is the regulators or the social media providers that take the first step, what is clear is that something needs to change. Social media usage is only set to increase, and for the harrowing statistics on online abuse to change, improvements in social media safety measures must take priority.