Everything the right – and the left – are getting wrong about the Online Safety Act | George Billinge

Last week, the UK’s Online Safety Act came into force. It’s fair to say it hasn’t been smooth sailing. Donald Trump’s allies have dubbed it the “UK’s online censorship law”, and the technology secretary, Peter Kyle, added fuel to the fire by claiming that Nigel Farage’s opposition to the act put him “on the side” of Jimmy Savile.

Disdain from the right isn’t surprising. After all, tech companies will now have to assess the risk their platforms pose of disseminating the kind of racist misinformation that fuelled last year’s summer riots. What has particularly struck me, though, is the backlash from progressive quarters. Online outlet Novara Media published an interview claiming the Online Safety Act compromises children’s safety. Politics Joe joked that the act involves “banning Pornhub”. New YouGov polling shows that Labour voters are even less likely to support age verification on porn websites than Conservative or Liberal Democrat voters.

I helped draft Ofcom’s regulatory guidance setting out how platforms should comply with the act’s requirements on age verification. Given the scope of the act, and the decision not to force tech platforms to adopt specific technologies, this guidance was broad and principles-based – if the regulator prescribed specific measures, it would be accused of authoritarianism. A principles-based approach is more sensible and future-proof, but it does allow tech companies to interpret the regulation poorly.

Despite these challenges, I am supportive of the principles of the act. As someone with progressive politics, I have always been deeply concerned about the impact of an unregulated online world. Bad news abounds: X allowing racist misinformation to spread in the name of “free speech”, and children being radicalised or targeted by online sexual extortion. It was clear to me that these regulations would start to move us away from a world in which tech billionaires could dress up self-serving libertarianism as lofty ideals.

Instead, a culture war has erupted that is laden with misunderstanding, with every poor decision made by tech platforms being blamed on regulation. This strikes me as incredibly convenient for tech companies seeking to avoid accountability.

So what does the act actually do? In short, it requires online services to assess the risk of harm – whether illegal content such as child sexual abuse material, or, in the case of services accessed by children, content such as porn or suicide promotion – and implement proportionate systems to reduce those risks.

It’s also worth being clear about what isn’t new. Tech companies have been moderating speech and taking down content they don’t want on their platforms for years. However, they have done so based on opaque internal business priorities, rather than in response to proactive risk assessments.

Let’s look at some examples. After the Christchurch terror attack in New Zealand, which was broadcast in a 17-minute Facebook Live post and shared widely by white supremacists, Facebook trained its AI to block violent live streams. More recently, after Trump’s election, Meta overhauled its approach to content moderation and removed factchecking in the US, a move which its own oversight board has criticised as being too hasty.

Rather than making decisions to remove content reactively, or in order to appease politicians, tech companies will now need to demonstrate they have taken reasonable steps to prevent this content from appearing in the first place. The act isn’t about “catching baddies”, or taking down specific pieces of content. Where censorship has happened, such as the suppression of pro-Palestine speech, it was taking place long before the Online Safety Act was implemented. Where public interest content is being blocked as a result of the act, we should be interrogating platforms’ risk assessments and decision-making processes, rather than repealing the legislation. Ofcom’s new transparency powers make this achievable in a way that wasn’t possible before.

Yes, there are some flaws with the act, and teething issues will persist. As someone who worked on Ofcom’s guidance on age verification, even I am slightly confused by the way Spotify is checking users’ ages. The widespread adoption of VPNs to circumvent age checks on porn sites is clearly something to think about carefully. Where should age assurance be implemented in a user journey? And who should be responsible for informing the public that many age assurance technologies delete users’ personal data once their age is confirmed, while some VPN providers sell user information to data brokers? But the response to these issues shouldn’t be to repeal the Online Safety Act: it should be for platforms to hone their approach.

There is an argument that the problem ultimately lies with the business models of the tech industry, and that this kind of legislation will never be able to truly tackle that. The academic Shoshana Zuboff calls this “surveillance capitalism”: tech companies get us hooked through addictive design and extract huge amounts of our personal data in order to sell us hyper-targeted ads. The result is a society characterised by atomisation, alienation and the erosion of our attention spans. Because the easiest way to get us hooked is to show us extreme content, children are directed from fitness influencers to content promoting disordered eating. Add to this the fact that platforms are designed to make people expand their networks and spend as much time on them as possible, and you have a recipe for disaster.

Again, it’s a worthy critique. But we live in a world where American tech companies hold more power than many nation states – and they have a president in the White House willing to start trade wars to defend their interests.

So yes, let’s look at drafting regulation that addresses addictive algorithms and support alternative business models for tech platforms, such as data cooperatives. Let’s continue to explore how best to provide children with age-appropriate experiences online, and think about how to get age verification right.

But while we’re working on that, really serious harms are taking place online. We now have a sophisticated regulatory framework in the UK that forces tech platforms to assess risk and allows the public to have far greater transparency over their decision-making processes. We need critical engagement with the regulation, not cynicism. Let’s not throw out the best tools we have.

  • George Billinge is a former Ofcom policy manager and is CEO of tech consultancy Illuminate Tech
