4 May 2025

How Activists and Technologists Are Rewriting the Rules of AI Moderation

When queer creators and sexual health educators find their content flagged, demonetized, or outright erased by social media algorithms, it’s not just a technical glitch; it’s a systemic problem. As Jenni Olson of GLAAD has pointed out, “If you don’t have LGBT people, people of color at the table giving the inputs to the AI systems, then they will be racist and homophobic and so on.” The question is: Who is fighting back, and how are they changing the system?

Across the tech world, a coalition of activists, researchers, and forward-thinking technologists is working to expose and correct the bias that plagues automated moderation. Their efforts are as diverse as the communities they represent, but they share a common goal: making digital spaces safer and more inclusive for everyone.

We essentially bring marginalized communities into the machine learning pipeline, because that’s what’s missing from this AI.

One approach gaining traction is community-driven AI development. As Brown, founder of the tech company Reliabl, explains, “We essentially bring marginalized communities into the machine learning pipeline, because that’s what’s missing from this AI.” By involving LGBTQ+ people and other underrepresented groups in the very design and training of moderation systems, these technologists hope to teach algorithms the difference between hate speech and vital, affirming information.
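
What that looks like in practice: community annotators, rather than a generic keyword list, decide which posts count as harmful, and the model learns from those labels. The sketch below is a minimal, hypothetical illustration in Python; the data, labels, and model choice are assumptions for the sake of the example, not a description of Reliabl’s actual system.

```python
# Minimal sketch of community-in-the-loop labeling (illustrative only).
# Community annotators supply the labels, so identity-related language that
# automated filters often flag wholesale can be marked as affirming, not harmful.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical posts labeled by community annotators:
# 1 = remove (hate speech), 0 = keep (affirming / educational).
texts = [
    "queer artists share their coming-out stories",       # keep
    "trans health clinic posts hormone therapy FAQ",      # keep
    "slur-laden post attacking a trans creator",          # remove
    "coordinated harassment targeting a queer streamer",  # remove
]
labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# New posts are scored by a model whose notion of "harmful" was defined
# by the affected community, not inferred from keyword co-occurrence alone.
print(model.predict(["support group for LGBTQ+ teens meets on Sundays"]))
```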

Oversight and accountability are also on the rise. Independent bodies like the Oversight Board have begun to scrutinize how platforms such as Meta deploy AI for content moderation. Their mission, as described in recent reports, is to “embed freedom of expression and human rights into these tools early and by design,” while pushing for transparency and third-party access to moderation data.

All these moderation algorithms are written primarily by white, straight cis men. And so that’s the perspective that they’re going to have.

Experts stress the importance of regular bias audits and inclusive data practices. Tools like Google’s What-If Tool let developers probe how a model behaves across different groups and surface bias in machine learning systems. But as Brown notes, “all these moderation algorithms are written primarily by white, straight cis men. And so that’s the perspective that they’re going to have. And then they wonder why they can’t work in different contexts. So there’s a laziness factor there.” The antidote, advocates argue, is intentional inclusion and rigorous oversight.
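
What does such an audit look like in practice? A toy example: compare how often benign posts are wrongly removed for different communities, and flag large gaps. The field names, sample data, and the 1.25x disparity threshold below are illustrative assumptions, not any platform’s actual audit procedure.

```python
# Minimal sketch of a moderation bias audit (all data and thresholds made up).
# It compares how often benign posts are wrongly removed, broken out by the
# community a post relates to -- the kind of error that hits LGBTQ+ creators hardest.
from collections import defaultdict

decisions = [  # hypothetical audit sample: (group, was_removed, was_actually_violating)
    ("lgbtq", True, False), ("lgbtq", True, False), ("lgbtq", False, False),
    ("general", True, False), ("general", False, False),
    ("general", False, False), ("general", False, False),
]

removed_benign = defaultdict(int)
total_benign = defaultdict(int)
for group, removed, violating in decisions:
    if not violating:
        total_benign[group] += 1
        if removed:
            removed_benign[group] += 1

rates = {g: removed_benign[g] / total_benign[g] for g in total_benign}
print(rates)  # roughly {'lgbtq': 0.67, 'general': 0.25} for this sample

baseline = rates.get("general", 0.0)
for group, rate in rates.items():
    # Flag any group whose wrongful-removal rate far exceeds the baseline.
    if baseline and rate > 1.25 * baseline:
        print(f"Audit flag: {group} posts removed in error {rate:.0%} vs {baseline:.0%}")
```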

Grassroots advocacy is another powerful force. Organizations such as Fight for the Future and Forbidden Colours are building coalitions to demand stronger regulations and explicit inclusion of LGBTQ+ experts in sensitivity checks. They call for clear grievance mechanisms for those harmed by AI-powered technologies and legal requirements for platforms to cooperate with civil society.

Education and awareness are foundational. By shining a light on algorithmic bias through research, media coverage, and public campaigns, activists are ensuring that the problem can no longer be ignored. As one Nature Portfolio feature noted, “AI safety filters often result in the unintended censorship of LGBTQ+ content, and even neutral terms like ‘transgender’ can trigger content warnings.” The more people know, the harder it is for platforms to hide behind opaque policies.

Finally, some technologists are harnessing AI’s power for good, training new models to identify and remove hate speech targeting LGBTQ+ users while protecting affirming and educational content. Collaborations between groups like Element AI and Amnesty International have demonstrated that, with the right data and intent, AI can be part of the solution as well as the problem.
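
One simple way to operationalize “protecting affirming content” is to tune the model’s removal threshold against a held-out set of affirming and educational posts, capping how many of them may be wrongly taken down. The sketch below is an illustration with made-up scores and a made-up 5% ceiling, not a description of any group’s actual method.

```python
# Illustrative threshold selection (assumed data and numbers): remove as much
# hate speech as possible while keeping wrongful removals of affirming or
# educational posts below an agreed ceiling (here, 5%).
# Scores are the model's predicted probability that a post is hate speech.
hate_scores = [0.92, 0.88, 0.75, 0.61, 0.55]        # posts annotators confirmed as hate speech
affirming_scores = [0.40, 0.22, 0.18, 0.09, 0.03]   # affirming / sex-ed posts that must stay up

best = None
for threshold in [round(t * 0.05, 2) for t in range(1, 20)]:
    caught = sum(s >= threshold for s in hate_scores) / len(hate_scores)
    wrongly_removed = sum(s >= threshold for s in affirming_scores) / len(affirming_scores)
    if wrongly_removed <= 0.05:          # hard ceiling on harm to protected content
        if best is None or caught > best[1]:
            best = (threshold, caught, wrongly_removed)

print(best)  # (0.45, 1.0, 0.0): all sample hate speech removed, no affirming posts lost
```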

The fight is far from over, but the message is clear: the future of content moderation must be built by and for the communities it serves. As activists and technologists continue to rewrite the rules, they are proving that algorithms don’t have to be biased; they just have to be taught better.
