For many women and girls, online spaces are too often places where they risk experiencing misogyny. Whether that takes the form of online harassment, domestic abuse, intimate image abuse, or exposure to misogynistic content, there is a clear need to make the online world safer for women and girls.
In light of this, Ofcom have recently published draft guidance for organisations on the steps they can take to help achieve this goal. They’re also consulting on what else these organisations can do to prevent harm to women and girls online. This guidance covers actions such as ‘abusability testing’, technology to prevent intimate image abuse, stronger account security, and more.
From a data privacy perspective, a lack of online safety for women and girls directly threatens their rights and freedoms: lived experiences of doxxing (the sharing of someone’s personal information online without their consent) and cyberstalking expose them to harassment and real-world harm. Weak protections also force many to self-censor or withdraw from digital spaces, underlining the need for stronger safeguards against privacy violations and online abuse.
This blog will explore the latest guidance published by Ofcom, entitled ‘A Safer Life Online for Women and Girls’, providing an overview of the steps organisations will be expected to take to make the internet a safer place for women and girls.
What Issues Do Women and Girls Face Online?
The internet can be an unsafe environment for women and girls, enabling abuse, suppressing their voices, and fostering misogynistic spaces. Ofcom research shows that women experience greater harm online and have more concerns about online safety than men. This harm often results from cyberbullying, hate speech, sexual exploitation, defamation, non-consensual intimate image sharing, sextortion and so-called ‘revenge porn’.
Further to this, UN Women report that sexual harassment and stalking are the most commonly reported forms of technology-facilitated violence experienced by women and girls. Women typically suffer harassment on digital platforms through abuse in comment sections, unsolicited explicit messages, and tracking via GPS and location-based apps. This combination of privacy violations creates the all-too-common scenario in which the rights and freedoms of women and girls are placed in jeopardy.
According to a study undertaken by Amnesty International in 2017, 26% of women who had experienced abuse or harassment across all countries surveyed said personal or identifying details about them had been shared online, and over half (59%) of women who had experienced abuse or harassment online said it came from complete strangers.
The scale of the problem is stark: according to one global study, 58% of girls and young women have experienced some form of online harassment. As a result, based on Ofcom’s Online Experience Tracker, women are less likely than men to think that the benefits of the online world outweigh the risks (65% vs. 70%) and less likely to believe that the internet is a good thing for society (34% vs. 47%).
How Can Organisations Tackle Online Misogyny and Related Issues?
The Online Safety Act
The Online Safety Act aims to make online service providers – including social media, gaming services, dating apps, discussion forums and search services – legally responsible for protecting people in the UK from illegal content and content harmful to children, including harms that disproportionately affect women and girls.
The Act became law in 2023, with duties for platforms coming into effect in 2025. Once these requirements take effect, Ofcom will have enforcement powers to initiate investigations. Where it is found that an online service provider has contravened its obligations, Ofcom will have the power to impose a penalty of up to 10% of qualifying worldwide revenue or £18 million (whichever is the greater) and require remedial action to be taken.
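To put that penalty cap in concrete terms, here is a minimal sketch. The 10% and £18 million figures come from the Act itself; the function name and example revenues are purely illustrative.

```python
# Illustrative only: the maximum penalty under the Online Safety Act is
# the greater of 10% of qualifying worldwide revenue or £18 million.
def max_penalty_gbp(qualifying_worldwide_revenue_gbp: float) -> float:
    """Return the statutory cap on an Ofcom penalty for a given revenue."""
    return max(0.10 * qualifying_worldwide_revenue_gbp, 18_000_000)

# A provider with £500m qualifying revenue faces a cap of £50m, while one
# with £100m in revenue is still exposed to the £18m floor.
print(max_penalty_gbp(500_000_000))  # 50000000.0
print(max_penalty_gbp(100_000_000))  # 18000000
```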
The purpose of the legislation is best summarised in the Act itself, where it states:
“[This Act]— (a) imposes duties which, in broad terms, require providers of services regulated by this Act to identify, mitigate and manage the risks of harm (including risks which particularly affect individuals with a certain characteristic) from—
(i) illegal content and activity, and
(ii) content and activity that is harmful to children, and
(b) confers new functions and powers on the regulator, OFCOM.”
It is this last clause that gives rise to the new guidance: Ofcom have been tasked with proposing measures that would encourage tech firms to tackle online harms against women and girls.
A Safer Life Online for Women and Girls (Ofcom Draft Guidance)
Ofcom’s draft guidance, A Safer Life Online for Women and Girls, based on insights from victims, survivors, women’s advocacy groups and safety experts, sets out a control framework that providers can implement to improve women’s and girls’ safety. It focuses on addressing four key issues:
- Online misogyny – content that actively encourages or cements misogynistic ideas or behaviours, including through the normalisation of sexual violence.
- Pile-ons and online harassment – when a woman or groups of women are targeted with abuse and threats of violence. Women in public life, including journalists and politicians, are often affected.
- Online domestic abuse – the use of technology for coercive and controlling behaviour within an intimate relationship.
- Intimate image abuse – the non-consensual sharing of intimate images, including those created with AI, as well as cyberflashing (sending explicit images to someone without their consent).
The guidance identifies nine areas where online service providers should do more to improve women and girls’ online safety by taking responsibility for harms, designing their platforms to prevent harm, and supporting their users:
- Ensure governance and accountability processes address online gender-based harms
- Conduct risk assessments that focus on harms to women and girls
- Be transparent about women and girls’ online safety
- Conduct abusability evaluations and product testing
- Set safer defaults
- Reduce the circulation of content depicting, promoting or encouraging online gender-based harms
- Give users better control over their experiences
- Enable users who experience online gender-based harms to make reports
- Take appropriate action when online gender-based harms occur
The intention is to promote a safety-by-design approach, demonstrating how providers can embed the concerns of women and girls throughout the operation and design of their services, as well as their features and functionalities.
Ofcom are currently inviting feedback on the draft guidance, as well as further evidence on any additional measures that could be included to address harms that disproportionately affect women and girls. Responses must be submitted by 23 May 2025. Once Ofcom have examined all responses, they will publish a statement with their decisions, along with final guidance, later this year.
Technology firms will also be expected to regularly assess new or emerging threats, and Ofcom expect to report on how well providers have tackled harms to women and girls around 18 months after the final guidance comes into effect.
What Immediate Steps Can Technology Providers Take?
Online service providers can begin taking pragmatic steps to align internal governance and processes with the principles set out in Ofcom’s guidance, whilst bearing in mind that the guidance is not yet final. That said, it provides a good indication of the direction of travel and should be considered alongside the existing requirements of the Online Safety Act, whose duties come into effect this year.
The guidance itself sets out nine high-level foundational steps, in addition to nine high-level good practice steps. If implemented, the foundational steps would demonstrate that online service providers are complying with relevant duties, while the good practice steps outline additional ways providers could build on them to further improve women and girls’ online safety.
The following are tangible steps, and demonstrations of good industry practice, that service providers can take to ensure better protections:
- ‘Abusability’ testing to identify how a service or feature could be exploited by a malicious user;
- Technology to prevent intimate image abuse, such as matching uploads against databases of known non-consensual images and removing them (see the sketch after this list);
- Prompts asking users to reconsider before posting harmful material – including detected misogyny, nudity or content depicting illegal gendered abuse and violence;
- Easier account controls, such as bundling default settings to make it easier for women experiencing pile-ons to protect their accounts;
- Visibility settings, allowing users to delete or change the visibility of their content, including material they uploaded in the past;
- Strengthening account security, for example, using more authentication steps, making it harder for perpetrators to monitor accounts without the owner’s consent;
- Removing geolocation by default, because leaked location data can lead to serious harm, including stalking and threats to life;
- Training moderation teams to deal with online domestic abuse;
- Reporting tools that are accessible and support users who experience harm;
- User surveys to better understand people’s preferences and experiences of risk, and how best to support them; and
- More transparency, including publishing information about the prevalence of different forms of harms, user reporting and outcomes.
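To make the intimate image abuse measure above concrete, the following is a minimal sketch of perceptual hash matching, assuming a registry of hashes of known non-consensual images (in practice supplied by a hash-sharing scheme such as StopNCII rather than assembled by the provider). It uses the open-source Pillow and imagehash libraries; the threshold, names and moderation flow are illustrative assumptions, not taken from Ofcom’s guidance.

```python
# A minimal sketch, not production code: matching uploads against a
# registry of perceptual hashes of known non-consensual intimate images.
from PIL import Image
import imagehash

# In practice these hashes would come from a hash-sharing scheme
# (e.g. StopNCII), not be assembled by the provider itself.
KNOWN_ABUSE_HASHES: list[imagehash.ImageHash] = []

HAMMING_THRESHOLD = 8  # Illustrative tolerance for near-duplicates.

def matches_known_abuse_image(path: str) -> bool:
    """Return True if the upload is a near-duplicate of a registered image."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash values yields their Hamming distance, so
    # small values mean visually similar images; perceptual hashes survive
    # re-encoding, resizing and minor edits, unlike exact file hashes.
    return any(candidate - known <= HAMMING_THRESHOLD
               for known in KNOWN_ABUSE_HASHES)
```

A provider would run a check like this at upload time and quarantine any match for human review, rather than deleting it outright.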
How Does This Impact Data Privacy?
More than a third of women worldwide have experienced abuse online, and this figure rises to almost 50% for younger women. Further to this, women with a high public profile, for example climate activists, journalists and politicians, are often victims of cyber harassment or abuse. It is therefore an absolute necessity that platforms take proactive steps to prevent harassment, doxxing and, in extreme cases, cyberstalking – all of which cause significant harm from a data privacy perspective.
The Online Safety Act has received criticism for several reasons, including its potential impact on freedom of speech, censorship of protected speech, and the pitfalls of outsourcing decisions about ‘illegality’ to online service providers. There is also the power imbalance posed by technology platforms, whose business models are ultimately based on advertising and monetising user engagement, and the potential weakening of encryption and anonymity online.
Shoosmiths’ review of the Online Safety Act articulates the specific data privacy concerns introduced by its application, including risks around age verification and the use of automated decision-making for content moderation. Nevertheless, technology providers must start taking steps to reduce online abuse and ultimately increase safety for all, but especially for women and girls, who are statistically the most frequent victims.
On balance, if this guidance is followed and controls are implemented, its aim is to increase accountability, both for online service providers and for the individuals who are ultimately subjecting their victims to abuse. Unfortunately, the guidance is exactly that – guidance. While Ofcom have enforcement powers under the Act, an organisation cannot face legal proceedings simply for declining to take the recommended steps.
Ofcom now face the task of getting the big technology players around the table, garnering their buy-in and ensuring they demonstrate positive action in this space. Technology providers and consumers will also play a critical role in holding themselves, and the platforms they use daily, to account.