The internet has come a long way in the past 50 or so years.
After development of the network connecting computers worldwide began in California in the late 1960s, its popularity exploded.
It is estimated there are now around five billion users across the planet.
The internet has become a fundamental part of our lives – making it much easier to communicate, shop, socialise and work.
But the network also has downsides – people can get addicted to constantly using social media platforms such as Facebook or Twitter, and we have also seen the rise of anonymous online trolls.
Rapid development of the internet sector has left it under-regulated – and seen it labelled the “Wild West” of modern life.
However, better days could lie ahead. Fresh measures are being added to the UK Government’s draft Online Safety Bill, which will give social media users more control over what they see online and who can interact with them.
Independent UK organisation Clean up the Internet says: “The UK has a serious problem with how conversations take place online.
“Bullying, harassment and intimidation are an everyday occurrence.
“Debates get derailed by abuse and misinformation. Many people feel uneasy about the way online conversations so often take a nasty turn, whilst others feel unable to take part at all.
“British democracy is being damaged by this toxic culture.
“Clean up the Internet wants to change this. We want an internet where everyone can explore, discuss and debate the issues they care about, without being subjected to abuse or confused by fake news.
“We want websites and social media networks to make life harder for trolls and bullies.
“Where tech companies are too slow to act, we want the government to force them.”
Clean up the Internet welcomed last month’s new proposals in the Online Safety Bill.
Lawyer Stephen Kinsella, a founder of the organisation, said: “It does indeed appear the government has listened, adopting an approach very similar to that which we have been calling for, along with a growing number of other organisations, parliamentarians and parliamentary committees.
“We do need to wait for the detail as it emerges in the Bill, but on the basis of the announcement we can make a number of comments.
“First, introducing these extra measures into the Online Safety Bill is not, and doesn’t present itself as, a magic bullet.
“But it will strengthen the Bill significantly by putting some social obligations on platforms regarding how they manage a risky design feature.”
Mr Kinsella added: “The platforms’ current laissez-faire approach to anonymity, designed to maximise their ad revenue and minimise their overheads, has enabled widespread misuse of anonymity, to abuse other users and to amplify misinformation.
“Giving users a ‘right to verify’ and more control over who can interact with them strikes a better balance between different users’ rights and freedoms.
“Under these latest proposals, no-one would be forced to verify their identity.
“But equally no-one would be forced to interact with anonymous accounts – given the higher level of risk of abuse, etcetera – if they don’t want to.”
He continued: “It is notable that a range of other informed commentators and experts have welcomed the announcement.
“We expect it to also prove popular with the UK general public.
“Numerous opinion polls have indicated the UK public recognise this problem, support a change of approach and would by a large majority be willing to verify their identity.
“The strength of public feeling has also been reflected in several very large petitions on the parliament website, and many parliamentarians have highlighted the number of their constituents affected by abuse by anonymous accounts.”
The government says it recognises there are too many people experiencing online abuse and there are concerns anonymity is fuelling this, with offenders having little to no fear of recrimination from either the platforms themselves or law enforcement.
It adds: “Over the past year people in the public eye, including England’s Euro 2020 footballers, have suffered horrendous racist abuse.
“Female politicians have received abhorrent death and rape threats, and there is repeated evidence of ethnic minorities and LGBTQ+ people being subject to co-ordinated harassment and trolling.”
According to digital secretary Nadine Dorries, technology firms have a responsibility to stop anonymous trolls “polluting” their platforms.
Ms Dorries said: “We have listened to calls for us to strengthen our new online safety laws and are announcing new measures to put greater power in the hands of social media users themselves.
“People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms.”
The government says the vast majority of social networks used in the UK do not require people to share any personal details – they can identify themselves by a nickname, alias or other term not linked to a legal identity.
It adds: “Removing the ability for anonymous trolls to target people on the biggest social media platforms will help tackle the issue at its root, and complement the existing duties in the Online Safety Bill and the powers the police have to tackle criminal anonymous abuse.”
Cracking down on ads
The Bill will also require the largest and most popular social media platforms and search engines to prevent paid-for fraudulent adverts appearing on their services.
This will improve protection for internet users from the potentially devastating impact of fake ads – including where criminals impersonate celebrities or companies to steal people’s personal data, peddle dodgy financial investments or break into bank accounts.
Separately, the government is launching a consultation on proposals to tighten the rules for the online advertising industry.
Influencers failing to declare they are being paid to promote products on social media could also be subject to stronger penalties.
What’s in the Online Safety Bill?
The draft Online Safety Bill places requirements on companies to tackle harmful content posted anonymously on their platforms and manage the risks around the use of anonymous profiles.
This could include banning repeat offenders associated with abusive behaviour, preventing them from creating new accounts or limiting their functionality.
Under the first new duty announced at the end of last month, companies with the largest number of users and highest reach – and thus posing the greatest risk – must offer ways for their users to verify their identities and control who can interact with them.
This could include giving users options to tick a box in their settings to receive direct messages and replies only from verified accounts.
The onus will be on the platforms to decide which methods to use to fulfil this identity-verification duty, but they must give users the option to opt in or out.
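The Bill does not prescribe how such a setting should be built, but a minimal sketch of the ‘tick box’ filter described above – using illustrative Account and Message types that are assumptions rather than any real platform’s code – might look like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Account:
    handle: str
    is_verified: bool = False  # True once the platform has verified the holder's identity

@dataclass
class Message:
    sender: Account
    text: str

def deliver(inbox: List[Message], message: Message, verified_only: bool) -> bool:
    """Deliver a message to an inbox, honouring the recipient's
    'verified accounts only' setting. Returns True if delivered."""
    if verified_only and not message.sender.is_verified:
        return False  # filtered out: the recipient never sees it
    inbox.append(message)
    return True

# Example: a user who has ticked the 'verified only' box
inbox: List[Message] = []
troll = Account("anon123")                    # unverified account
friend = Account("jane", is_verified=True)    # verified account
deliver(inbox, Message(troll, "abuse"), verified_only=True)   # blocked
deliver(inbox, Message(friend, "hello"), verified_only=True)  # delivered
```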
When it comes to verifying identities, some platforms may choose to give users an option to verify their profile picture to ensure it is a true likeness.
Or they could use two-factor authentication, where a platform sends a prompt – such as a one-time code – to a user’s mobile number for them to confirm (a simple sketch of this flow appears below).
Alternatively, verification could include people using a government-issued ID, such as a passport, to create or update an account.
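None of these methods is specified in detail in the announcement, but a minimal sketch of the one-time-code flow mentioned above might work as follows – send_sms() here is a hypothetical stand-in for whatever SMS gateway a platform actually uses, not a real library call:

```python
import hmac
import secrets

def send_sms(phone_number: str, body: str) -> None:
    # Stub standing in for a real SMS gateway; a platform would call its own provider here.
    print(f"SMS to {phone_number}: {body}")

def start_phone_verification(phone_number: str) -> str:
    """Generate a six-digit one-time code and text it to the user.
    In a real system the code would be stored server-side with an expiry."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    send_sms(phone_number, f"Your verification code is {code}")
    return code

def check_code(expected: str, submitted: str) -> bool:
    # Constant-time comparison avoids leaking the code through timing differences.
    return hmac.compare_digest(expected, submitted)

# Example flow: the code is texted out, then checked against what the user types back
expected = start_phone_verification("+44 7700 900123")  # fictional Ofcom test number
print(check_code(expected, expected))  # True when the user enters the code correctly
```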
But the government says banning anonymity entirely would negatively affect those who have positive online experiences or use it for their personal safety – such as domestic abuse victims, activists living in authoritarian countries or young people exploring their sexuality.
The new duty aims to strike a better balance between empowering and protecting adults – particularly the vulnerable – and safeguarding freedom of expression online, because it will not require any legal free speech to be removed.
While this may not prevent anonymous trolls from posting abusive content in the first place – provided it is legal and does not contravene the platform’s terms and conditions – it will stop victims being exposed to it and give them more control over their online experience.
Users who see abuse will be able to report it. The government says the Bill will significantly strengthen the reporting mechanisms companies have in place for inappropriate, bullying and harmful content, and ensure they have clear policies and performance metrics for tackling it.
The Bill, which was introduced in Parliament on March 17, would also force companies to remove illegal content such as child sexual abuse imagery, the promotion of suicide, hate crimes and incitement to terrorism.
There is a growing list of toxic content and behaviour on social media which falls below the threshold of a criminal offence, but which still causes significant harm.
This includes racist abuse, the promotion of self-harm and eating disorders, and dangerous anti-vaccine disinformation.
Power of algorithms
The government says much of this is already expressly forbidden in social networks’ terms and conditions, but too often it is allowed to stay up and is actively promoted to people via algorithms.
Under the second new duty, companies will have to make tools available for their adult users to choose whether they want to be exposed to any legal but harmful content where it is tolerated on a platform.
These tools could include new settings and functions which prevent users receiving recommendations about certain topics or place sensitivity screens over that content.
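Again, the Bill leaves implementation to the platforms, but a minimal sketch of how such tools might filter a recommendation feed – with hypothetical muted_topics and screened_topics settings and platform-assigned topic labels, all illustrative assumptions – could look like this:

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class Post:
    text: str
    topics: Set[str]  # labels a platform's own classifiers might attach

def apply_content_controls(
    feed: List[Post],
    muted_topics: Set[str],
    screened_topics: Set[str],
) -> List[Tuple[Post, bool]]:
    """Apply two of the user controls described above to a recommendation feed:
    posts on muted topics are dropped entirely, while posts on screened topics
    are kept but flagged to be hidden behind a sensitivity screen."""
    result = []
    for post in feed:
        if post.topics & muted_topics:
            continue  # never recommend this topic to the user
        needs_screen = bool(post.topics & screened_topics)
        result.append((post, needs_screen))
    return result
```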