The Leaders Reshaping AI and LGBTQ+ Digital Safety
As the fight against algorithmic bias intensifies, a diverse coalition of organizations and individuals is leading the charge to create safer, more inclusive digital spaces for LGBTQ+ communities. Here are some of the most influential forces behind this movement:
GLAAD and Jenni Olson: Setting the Standard for Social Media Safety
GLAAD, the nation’s leading LGBTQ+ media advocacy organization, has become a central watchdog for digital safety. Under the direction of Jenni Olson, GLAAD’s Social Media Safety Program is holding tech giants accountable and demanding reforms. Olson, a pioneering LGBTQ+ media figure and co-founder of PlanetOut.com, spearheaded GLAAD’s Social Media Safety Index (SMSI), the first comprehensive evaluation of LGBTQ+ user safety across major platforms. “The current unregulated, unsafe landscape of social media presents real harms to LGBTQ people,” Olson warns, emphasizing GLAAD’s work to address online hate, AI bias, and data privacy. With the SMSI, Olson and her team provide a clear roadmap for platforms like Facebook, Instagram, TikTok, Twitter, and YouTube to improve their policies and practices. “We know these companies can make their products safer. With Jenni’s leadership…GLAAD will continue to tirelessly advocate for these solutions,” says GLAAD President Sarah Kate Ellis.
Forbidden Colours: A European Watchdog for LGBTIQ+ Rights
Based in Brussels, Forbidden Colours is a civil society organization dedicated to defending LGBTIQ+ rights across Europe, especially in the digital realm. Their recent report, “The Impact of AI on LGBTIQ+ People: From Discrimination to Disinformation,” authored by Megan Thomas and Meredith Veit, exposes how AI-driven content moderation can amplify harmful stereotypes and silence queer voices. The organization not only monitors anti-LGBTIQ+ initiatives but also connects activists, policymakers, and tech leaders to counter discrimination. “Automated content moderation presents a risk for freedom of expression, especially when the algorithms implementing content moderation policies are trained on biased datasets,” Thomas and Veit note, highlighting the urgent need for inclusive, rights-based AI governance.
Annie Brown and Lips: Building Safe Digital Havens
Annie Brown, the creator of the LGBTQ-focused social platform Lips, offers a grassroots alternative to mainstream networks. Launched in 2019, Lips now serves tens of thousands of users as a “little space on the internet for people to feel like they can be themselves.” Brown’s experience developing Lips gave her a unique perspective on moderation: “My conclusion I came to was that these big platforms make it seem like it’s harder than it is. They just don’t care enough is the problem.” She advocates for ethical AI built by and for marginalized communities, arguing, “With AI, it’s really about who builds the tools, and we really do have to build our own.”
Electronic Frontier Foundation (EFF): Legal Advocacy and Public Pressure
The EFF has been a vocal critic of algorithmic censorship, organizing petitions such as “Stop Facebook From Filth” to protest the unfair flagging of LGBTQ+ content. Their research, cited in academic studies, reveals that terms like “transgender” and “nonbinary” attract disproportionate moderation, often under the guise of “community standards.” The EFF’s advocacy pushes tech companies toward greater transparency and accountability, insisting that “AI models need training on multiple professed identities, and human guidance is necessary to distinguish harmful from non-harmful content.”
Megan Thomas and Meredith Veit: Researching Rights at the Intersection of Tech and Advocacy
Thomas and Veit, the authors behind Forbidden Colours’ landmark AI report, bring academic rigor and activist passion to the debate. Thomas specializes in amplifying marginalized voices through qualitative research and advocacy, while Veit’s work bridges business, human rights, and technology, focusing on at-risk groups and trauma-informed reporting. Their research not only documents the harms of algorithmic bias but also proposes actionable solutions for a more just digital future.
Tech Trailblazers: LGBTQ+ Leaders in the Industry
From Aga Bojko, Head of UX Research Operations at YouTube, to Annabelle Backman, Principal Security Engineer at AWS, LGBTQ+ technologists are shaping the future of digital safety and inclusion. These leaders, recognized in lists like “30 LGBTQ+ Individuals Breaking Barriers in Tech,” are not only advancing technology but also championing diversity and equity in an industry that has long marginalized queer voices.
A Movement, Not a Moment
What unites these organizations and individuals is a refusal to accept the status quo. Whether through watchdog reports, platform innovation, legal advocacy, or research, they are building a digital world where LGBTQ+ voices are not only protected but empowered. As Jenni Olson puts it, “GLAAD is advocating for solutions in numerous realms: online hate and harassment, AI bias, polarizing algorithms, data privacy. We’re working every day to hold platforms accountable and to secure safe online spaces for LGBTQ people.”
Spectrum for Men appreciates the efforts of all who work to ensure visibility and diversity.