
Social media platforms have long offered queer-trans individuals refuge from the alienation, bullying, and discomfort they might face in their offline lives. However, inadequate content moderation policies leave these same users vulnerable to hate speech, harassment, and targeted disinformation.
Meta’s recent updates to its content moderation guidelines, combined with ongoing failures by platforms like TikTok and X (formerly Twitter), highlight the urgent need for inclusive and proactive moderation policies.
Meta’s New Guidelines: A Backward Step?
Recently, Meta unveiled sweeping changes to its content moderation policies, presented as a recommitment to “free expression.” The changes include:
1. Ending its third-party fact-checking program in the United States and replacing it with “Community Notes,” a crowdsourced system similar to the one employed by X.
2. Rolling back restrictions on sensitive topics such as immigration, gender, and gender identity, limits Meta deemed excessive.
3. Shifting enforcement to rely less on automated systems and more on user-reported violations.
Joel Kaplan, Meta’s Chief Global Affairs Officer, framed these changes as a remedy for overreach and a reaffirmation of the company’s commitment to free expression. For marginalized groups, however, the implications are concerning.
In response, GLAAD President and CEO Sarah Kate Ellis criticized the changes, stating:
“Zuckerberg’s removal of fact-checking programs and industry-standard hate speech policies make Meta’s platforms unsafe places for users and advertisers alike. Without these necessary hate speech and other policies, Meta is giving the green light for people to target LGBTQ people, women, immigrants, and other marginalized groups with violence, vitriol, and dehumanizing narratives.”
This rollback prioritizes a hands-off approach under the guise of neutrality, ignoring the inherent power imbalances that exist in digital spaces. Without robust safeguards, queer-trans individuals are left more exposed to targeted harassment, disinformation, and harmful narratives.
Similar Failures on TikTok and X
Meta’s shortcomings echo broader trends across other social media platforms, reflecting systemic failures in content moderation. TikTok, for instance, has faced ongoing criticism for enabling the spread of anti-LGBTQ+ content. A report by the LGBTQ+ magazine them. highlighted how discriminatory rhetoric and coded hate speech often slip through the platform’s filters, fostering an unsafe environment for queer and trans users.
This aligns with findings from a recent Media Matters study, which revealed that TikTok’s recommendation algorithm actively amplifies homophobic and anti-trans content. The study documents how users engaging with neutral or LGBTQIA+ content are quickly funneled toward posts promoting anti-queer rhetoric and, in some cases, outright calls for violence. By failing to address these algorithmic biases, TikTok perpetuates a hostile digital space for queer communities, mirroring the issues seen on X (formerly Twitter) and other platforms struggling with inadequate content moderation practices.
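To see why an engagement-optimized feed can behave this way, consider a deliberately oversimplified Python sketch. It is emphatically not TikTok’s actual system, and every number in it is invented; it only illustrates the widely discussed mechanism in which ranking by predicted engagement alone rewards whatever provokes reactions, outrage included:

```python
# Toy illustration (not TikTok's real recommender; all numbers are
# invented). Hostile content often draws extra reactions (angry
# replies, dogpiles, quote-posts), so an objective that ranks purely
# by predicted engagement surfaces it, even to users who never
# sought it out.

posts = [
    {"label": "queer community post", "predicted_engagement": 0.05},
    {"label": "neutral post",         "predicted_engagement": 0.04},
    {"label": "hostile post",         "predicted_engagement": 0.12},
]

def rank(feed):
    """Engagement-only objective: highest predicted engagement first."""
    return sorted(feed, key=lambda p: p["predicted_engagement"], reverse=True)

for slot, post in enumerate(rank(posts), start=1):
    print(f"feed slot {slot}: {post['label']}")
# The hostile post takes the top slot simply because outrage is
# "engaging"; nothing in the objective distinguishes engagement
# born of harm from engagement born of community.
```

Nothing in such an objective asks whether the engagement is healthy, which is why audits like Media Matters’ keep finding the same funnel.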
X’s 2024 transparency report reveals a troubling disconnect between the volume of harmful content users flag and the platform’s enforcement actions. Users filed more than 224 million reports, yet suspensions have barely risen since 2021, and enforcement against hate speech has collapsed: just 2,361 suspensions for hateful conduct in 2024, down from roughly 104,000 in 2021. The drop reflects policy changes under which misgendering and deadnaming are no longer treated as violations.
As highlighted in a recent LinkedIn post by CONTIO Tech, X’s growing reliance on AI for moderation is falling short. The AI systems struggle to detect nuanced hate speech, often missing coded language or misinterpreting context. With limited human oversight, this gap has contributed to rising anti-queer and anti-trans rhetoric on the platform.
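A deliberately simplified sketch makes that failure mode concrete. It is not any platform’s actual pipeline, and the blocked terms are neutral placeholders rather than real slurs; it shows how context-blind keyword matching yields both false negatives on coded or obfuscated hate speech and false positives on self-referential, reclaimed usage:

```python
# Toy keyword filter (no platform's actual pipeline; "slur_a" and
# "slur_b" are neutral placeholders, not real terms). Matching surface
# forms without context misses coded or obfuscated hate speech and
# wrongly flags reclaimed, self-referential usage.

BLOCKLIST = {"slur_a", "slur_b"}

def naive_flag(post: str) -> bool:
    """Flag a post if any word exactly matches the blocklist."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

examples = [
    # False negative: coded language never touches the blocklist.
    ("Keep those people away from our kids", True),
    # False negative: a trivial obfuscation defeats exact matching.
    ("You are such a slur--a", True),
    # False positive: the targeted community reclaiming the term.
    ("As a proud slur_a, let me tell you that word's history", False),
    # True positive: the blocked term used as a direct attack.
    ("All you slur_b people should disappear", True),
]

for text, actually_harmful in examples:
    flagged = naive_flag(text)
    verdict = "ok" if flagged == actually_harmful else "MODERATION ERROR"
    print(f"{verdict:17}| flagged={flagged!s:5} harmful={actually_harmful!s:5} | {text}")
```

Production classifiers are far more sophisticated than this toy, but the underlying weakness is the same: matching surface forms without understanding who is speaking, to whom, and why.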
The broader trend across social media suggests that marginalized users’ safety continues to be sidelined, raising concerns about the long-term impact of insufficient moderation practices.
For queer-trans communities, social media is often a rare space where they can safely explore and express their identities. It is a site of connection, advocacy, and empowerment. When platforms fail to address hateful content, the consequences are severe:
Exacerbation of Marginalization: Hateful narratives and harassment drive queer-trans users away from these spaces, increasing feelings of isolation and vulnerability.
Normalization of Anti-Queer Rhetoric: Allowing unchecked hate speech creates a perception that such views are acceptable, emboldening further hostility both online and offline.
Erosion of Trust: Users lose faith in platforms that fail to prioritize their safety, undermining their role as inclusive public forums.
Declining Public Support for LGBTQ+ Rights
While public support for same-sex marriage and nondiscrimination protections for LGBTQ+ Americans remains relatively high, a recent survey by the US-based Public Religion Research Institute (PRRI), discussed in The Guardian, reveals troubling declines between 2022 and 2023. Specifically:
1. Support for same-sex marriage fell from 69% to 67%.
2. Nondiscrimination protections in employment, housing, and public spaces dropped from 80% to 76%.
3. Opposition to businesses refusing services to LGBTQ+ individuals on religious grounds decreased from 65% to 60%.
These shifts, driven by changing attitudes among conservative groups, are a clear reflection of broader societal polarization. The rise of nationalist ideologies, often opposed to LGBTQ+ protections, further compounds the challenge of ensuring equitable rights.
The Role of Social Media in Amplifying Harm
As public support for LGBTQ+ rights wanes, legal and political efforts to roll back these rights have intensified, including bans on gender-affirming care and restrictions on gender expression in schools. Social media platforms, which should offer refuge and empowerment to LGBTQ+ individuals, often mirror and amplify these societal tensions. Unfortunately, many platforms, including Meta, fail to provide adequate content moderation, which allows discriminatory content to thrive unchecked.
A recent Human Rights Watch report highlights how Meta’s platforms, such as Facebook and Instagram, often fall short in protecting LGBTQ+ users, particularly in regions where laws and cultural norms criminalize LGBTQ+ identities. Meta’s over-reliance on automation and its shortage of human moderators who understand regional and linguistic contexts, particularly in the MENA region, have led to failures to remove harmful content promptly or accurately. They have also produced the opposite error: the improper removal of empowering content posted by LGBTQ+ individuals, such as self-referential use of hate speech terms meant to raise awareness (the false-positive failure sketched earlier). Moreover, moderation systems that struggle with dialects and regional language nuances contribute to a broader environment of vulnerability for LGBTQ+ users.
The PRRI survey shows that more than one in ten Americans identify as LGBTQ+, 22% of them under 30. For these individuals, social media platforms are crucial, serving as spaces for connection and identity validation. But as online hate escalates, these platforms increasingly pose a risk rather than offer a sanctuary. The lack of sufficient content moderation allows discriminatory narratives to spread freely, further threatening the safety of already marginalized individuals.
A Call for Inclusive and Effective Moderation Policies
To foster genuinely inclusive and safe digital spaces, social media platforms must take decisive, proactive measures to protect LGBTQ+ users globally from harm. As online hate continues to intersect with broader societal polarization, the need for stronger content moderation is more urgent than ever. This requires a multifaceted approach that addresses both systemic gaps and the lived realities of marginalized communities.
Key Actions for Platforms:
1. Strengthen Hate Speech Policies: Implement transparent, comprehensive guidelines that explicitly target harmful content aimed at LGBTQ+ individuals. Hate speech, incitement to violence, and harassment must be swiftly identified and removed.
2. Enhance AI Moderation with Human Oversight: Platforms should invest in advanced AI systems capable of detecting nuanced, context-specific hate speech, while ensuring human moderators provide critical oversight to address gaps in AI understanding (a simplified sketch of this routing pattern follows this list).
3. Engage Marginalized Communities: Content policies must reflect the experiences of queer and trans individuals. Platforms must actively collaborate with LGBTQ+ organizations and digital rights advocates to shape inclusive policies and ensure their enforcement is sensitive to community needs.
4. Monitor and Address Discriminatory Narratives: Recognizing that online hate fuels real-world harm, platforms must proactively target content that perpetuates harmful stereotypes. This involves more than content removal—it demands understanding how digital spaces shape public attitudes and contribute to broader societal discrimination.
5. Invest in Regional Expertise: Moderation efforts must be tailored to reflect cultural and linguistic nuances. By engaging local activists and utilizing resources such as the Arabic Queer Hate Speech Lexicon, platforms like Meta can better protect LGBTQ+ users in regions where they face heightened risks.
6. Commit to Transparency: Platforms must regularly publish detailed reports on enforcement actions, mistakes, and the direct impact on marginalized users. Transparency ensures accountability and drives continuous improvement in moderation practices.
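As a concrete (and again deliberately simplified) illustration of how recommendations 2 and 5 could fit together, the sketch below routes uncertain or lexicon-flagged posts to human reviewers with the relevant linguistic background instead of acting automatically; every name, threshold, and rule in it is hypothetical:

```python
# Hypothetical routing logic combining recommendations 2 and 5:
# automated classification with human oversight and regional expertise.
# Thresholds, names, and rules are illustrative, not any platform's
# actual policy.
from dataclasses import dataclass

AUTO_ACTION = 0.95   # act without review only on very confident scores
REVIEW_FLOOR = 0.40  # the uncertain band in between goes to a human

@dataclass
class Decision:
    action: str
    reason: str

def route(score: float, lexicon_hit: bool, language: str) -> Decision:
    """Route one post given a classifier score in [0, 1], whether a
    community-built regional lexicon (such as the Arabic Queer Hate
    Speech Lexicon cited above) matched, and the post's language."""
    if lexicon_hit:
        # Lexicon matches always get a reviewer fluent in the language:
        # at the keyword level, reclaimed or self-referential usage
        # looks identical to an attack.
        return Decision(f"human-review:{language}", "regional lexicon match")
    if score >= AUTO_ACTION:
        return Decision("remove", "high-confidence classifier score")
    if score >= REVIEW_FLOOR:
        return Decision(f"human-review:{language}", "uncertain classifier score")
    return Decision("allow", "low classifier score")

# An Arabic-language post that matches the lexicon goes to an
# Arabic-speaking reviewer rather than being removed automatically.
print(route(score=0.55, lexicon_hit=True, language="ar"))
print(route(score=0.98, lexicon_hit=False, language="en"))
print(route(score=0.10, lexicon_hit=False, language="en"))
```

The design choice worth noticing is the uncertain band: rather than forcing the model to decide everything, ambiguous cases, which is exactly where coded language and reclaimed terms live, default to people with context. Paired with the transparency reporting in recommendation 6, the error rate of each path also becomes measurable.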
By addressing these critical gaps, platforms can move beyond performative gestures and demonstrate a genuine commitment to protecting marginalized communities. Advocacy and accountability remain essential in ensuring that digital spaces—where LGBTQ+ users find community, build solidarity, and advocate for their rights—continue to thrive. The stakes are too high to allow harmful rhetoric and disinformation to persist unchecked.