Britain’s New Online Safety Law: What You Need to Know

Britain has passed a sweeping law to regulate online content, introducing age-verification requirements for pornography sites and new rules to curb hate speech, harassment and other illicit material. The Online Safety Bill, which also covers terrorist propaganda, online fraud and child safety, is one of the most far-reaching attempts by a Western democracy to regulate online speech. Here are the key points of the new law.

How does the law work?

The Online Safety Bill, which runs to about 300 pages, took more than five years to develop, setting off intense debates about how to balance free expression and privacy against the need to bar harmful content, particularly content aimed at children. The law allows regulators to impose fines of up to 18 million pounds, or 10 percent of annual global revenue, whichever is greater, and to block access to sites that fail to comply with its rules. Enforcement falls to the Office of Communications (Ofcom), Britain’s existing communications regulator, which will oversee the law and issue codes of practice for different types of online services.

The law applies to any online service that lets users share or discover user-generated content or interact with one another, including social media platforms, video-sharing sites, messaging apps, online forums, dating apps and online games. It also covers search engines and pornography sites. It does not apply to private communications such as emails and phone calls, or to journalistic content such as news websites and blogs.

What are the main obligations for online services?

The law imposes two main duties on online services: a duty of care and a duty to protect children. The duty of care requires services to take reasonable steps to prevent illegal content and activity on their platforms, such as child sexual abuse, terrorism, hate crimes, cyberbullying and fraud. The duty to protect children requires additional steps to shield children from harmful content and behaviour, such as pornography and material promoting self-harm, eating disorders, racism, misogyny or antisemitism.

Online services will have to assess the risks of harm their platforms pose to users and take proportionate measures to mitigate those risks. For example, they will have to introduce age-verification measures for pornography sites and other services unsuitable for children, give users tools and settings to control their exposure to harmful content and to report or block abusive users, and publish transparency reports on how they handle complaints and moderate content.

How does the law protect freedom of expression?

The law states that online services must respect users’ rights to freedom of expression and privacy when fulfilling their duties, and that they must not remove content that is legal but merely offensive or controversial, unless it is harmful to children. It also provides safeguards for journalistic content and democratic political speech, such as allowing users to challenge content-removal decisions and requiring services to consult news publishers and civil society groups on their policies.

The law also recognises that different types of online services may pose different levels of risk and harm to users and may require different approaches to regulation. The law categorises online services into three tiers based on their size, functionality and impact on society. Tier 1 services are the largest and most popular platforms that reach millions of users in the UK, such as Facebook, YouTube, Twitter and TikTok. Tier 2 services are smaller or niche platforms that have a significant presence in the UK or cater to specific audiences or interests, such as Reddit, Snapchat, Discord and OnlyFans. Tier 3 services are the smallest or least risky platforms that have a minimal presence in the UK or do not allow user-generated content or interaction, such as Netflix, Spotify, Amazon and eBay.

Tier 1 services will face the most stringent regulation and will have to comply with both the duty of care and the duty to protect children. They will also have to conduct regular risk assessments and audits of their systems and algorithms. Tier 2 services will face less stringent regulation and will only have to comply with the duty of care. They will also have more flexibility in how they implement the codes of practice issued by Ofcom. Tier 3 services will face minimal regulation and will only have to comply with specific requirements related to illegal content or activity.

What are the challenges and criticisms of the law?

The Online Safety Bill has been welcomed by many who campaigned for stronger regulation of online content, including child protection charities, anti-hate groups, mental health organisations and families who have lost loved ones to online harms. But the law has also drawn criticism from those concerned about its potential impact on freedom of expression, privacy and innovation.

Some critics argue that the law’s definitions of harmful content and behaviour are too vague and broad, leaving online services and regulators too much discretion to decide what is acceptable. They fear this could lead to over-censorship or inconsistent enforcement across platforms. They also worry that the law could chill online speech and discourage users from expressing their views or sharing information.

Others argue that the law is too intrusive and burdensome for online services, especially smaller or emerging platforms that may lack the resources or expertise to comply. They fear this could stifle innovation and competition in the digital sector and raise barriers to entry for newcomers. They also worry that the law could undermine end-to-end encryption and user privacy, because it gives the regulator powers to require services to scan private messages for child sexual abuse material.

Still others argue that the law is too ambitious and complex in its scope, covering a wide range of online services and issues that may require different solutions. They fear this could create confusion and uncertainty for services and users, as well as for regulators and lawmakers. They also worry that the law could prove difficult to implement and enforce in practice, given the global and fast-changing nature of the internet.

What are the next steps for the law?

The Online Safety Bill completed its passage through Parliament on Tuesday, September 19, 2023, more than two years after being published in draft form in May 2021. It now awaits royal assent, the formality that turns a bill into law. Even then, its requirements will not take effect all at once: enforcement will be phased in as Ofcom consults on and finalises its codes of practice, and the government has said it aims to have the first duties in force by early 2024.

In the meantime, Ofcom will continue to develop the codes of practice for different types of online services, in consultation with industry, civil society and other experts. It will also prepare to take on its new role as the online safety regulator: hiring staff, setting up systems and processes, and issuing guidance to online services and users. The regulator has said it will adopt a proportionate, risk-based approach, focusing on the most serious harms and the most impactful platforms.
