4 May 2025

How AI Filters Are Silencing LGBTQ+ and Sexual Health Education

It starts with a simple search. A teenager, perhaps in a small town, types “transgender health resources” into a social platform’s search bar. Instead of finding answers, they’re met with a warning, or worse, silence. This isn’t a glitch; it’s the result of algorithmic moderation systems that, in the name of “safety,” are disproportionately filtering out LGBTQ+ and sexual health educational content.

AI moderation systems often flag words like “gay,” “lesbian,” and “transgender” even in non-explicit contexts

This phenomenon is not just anecdotal. Researchers and advocacy groups have documented a pattern: AI moderation systems, designed to weed out explicit or harmful material, routinely flag, suppress, or demonetize content that uses terms like “gay,” “lesbian,” or “transgender,” even when the context is purely educational or supportive.
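To make the failure mode concrete, here is a minimal, hypothetical sketch in Python of the kind of context-blind keyword filter critics describe. The term list and logic are illustrative assumptions, not any platform’s actual moderation code.

```python
# Illustrative sketch only: a context-blind keyword filter of the kind
# critics describe. The term list and logic are assumptions for this example,
# not any platform's real moderation system.
FLAGGED_TERMS = {"gay", "lesbian", "transgender"}

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any listed term, with no notion of context."""
    words = {word.strip(".,!?\"'").lower() for word in post.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

# An educational, non-explicit question is flagged exactly like explicit content,
# because the filter sees only the word, never the intent.
print(is_flagged("Where can transgender teens find HIV prevention resources?"))  # True
print(is_flagged("How do I update my profile picture?"))                         # False
```

A filter built this way has no means of telling a sexual-health FAQ apart from explicit material, which is precisely the pattern researchers report.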

Jenni Olson, a senior director at GLAAD, put it bluntly: “If you don’t have LGBT people, people of color at the table giving the inputs to the AI systems, then they will be racist and homophobic and so on.” This insight, cited in Nouh J. Sepulveda’s 2025 analysis of algorithmic bias, underscores a core problem: these systems often reflect the prejudices baked into their training data, or the blind spots of those who build them.

A paywalled 2025 feature in Nature Portfolio describes how “AI safety filters often result in the unintended censorship of LGBTQ+ content, and even neutral terms like ‘transgender’ can trigger content warnings.” The result? Crucial information about sexual health, HIV prevention, or gender-affirming care is swept up in the same net as genuinely explicit material.

The consequences are far-reaching. According to a report by Forbidden Colours, automated filters have flagged posts containing words like “gay” and “lesbian” for demonetization or outright removal, regardless of context. This not only silences LGBTQ+ voices but also reinforces the harmful stereotype that queer identities are inherently inappropriate.

LGBTQ+ creators face higher rates of shadowbanning and wrongful account suspensions than their peers

The News is Out project further highlights how these moderation systems “reinforce stereotypes and discrimination,” noting that LGBTQ+ creators and educators face higher rates of shadowbanning and wrongful account suspensions than their non-LGBTQ+ peers.

The irony is stark: platforms claim to protect users, but their algorithms often deny marginalized communities access to life-saving information and support. As Sepulveda writes, “AI moderation systems unfairly flag LGBTQ+ content as explicit even when it is not,” a reality echoed by research from the Electronic Frontier Foundation and GLAAD.

The solution, experts say, is not to abandon moderation, but to demand transparency, community involvement, and regular auditing for bias. Until then, the digital public square remains uneven, its invisible filters silencing the very voices that need to be heard.
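What such an audit might look like in practice is sketched below: a hypothetical check that compares how often benign, non-explicit posts are wrongly flagged, split by whether they touch on LGBTQ+ topics. The group labels, function names, and sample data are assumptions for illustration, not drawn from any real audit.

```python
# Hypothetical bias-audit sketch: compare false-positive rates on benign posts
# across topic groups. All names and sample data here are illustrative
# assumptions, not results from an actual audit.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, was_flagged, is_actually_explicit) tuples."""
    flagged = defaultdict(int)
    benign = defaultdict(int)
    for group, was_flagged, is_explicit in records:
        if not is_explicit:            # only benign content can yield a false positive
            benign[group] += 1
            flagged[group] += int(was_flagged)
    return {group: flagged[group] / benign[group] for group in benign}

sample = [
    ("lgbtq_topic", True, False),    # educational post, wrongly flagged
    ("lgbtq_topic", False, False),
    ("other_topic", False, False),
    ("other_topic", False, False),
]
print(false_positive_rates(sample))  # {'lgbtq_topic': 0.5, 'other_topic': 0.0}
```

A persistent gap between those rates, measured on real moderation logs rather than toy data, is the kind of evidence advocates say platforms should be required to publish.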

READ MORE: How activists and technologists are working to change these systems.