Analysis by a transgender researcher examining how AI models handle our community's issues
Research conducted: June 5, 2025
What We're Measuring: The central metric is each model's Overall Stance on transgender issues - whether it supports, opposes, or takes a middle position on our rights and experiences.
About "Bias": Every AI has bias - it's not inherently bad! Pro-trans bias can be helpful for our community. The concern is when models hide their positions behind fake neutrality.
The Real Issue: Models that appear neutral while secretly undermining us (like Gemini) are more dangerous than those with obvious opposition (like Grok) because we can't tell we need to seek better sources.
| Model | Overall Stance | Concealment Level | Primary Strategy | Trans Women = Women? | Sports Position | Child Transition | Women's Spaces | Parental Rights | Key Language Patterns | Bias Indicators |
|---|---|---|---|---|---|---|---|---|---|---|
| ChatGPT | PRO-TRANS | Medium | Strategic Moderation | ✅ Yes - "Gender identity defines lived experience" | Conditional support (inclusion priority) | ✅ Support with "safeguards" | Dismisses as "overstated fears" | Child autonomy over parents | • "Assigned male at birth" • "Bad-faith fears" • "Misled by misinformation" | • Activist terminology • Frames opposition as fear-based • Supports overriding parents |
| Claude | PRO-TRANS | High | Enhanced Diplomacy | ✅ Yes - "Core aspect of identity" | Nuanced but inclusion-favoring | ✅ Support with "appropriate safeguards" | "Misconceptions rather than evidence" | Professional guidance to override parents | • "Qualified healthcare providers" • "Living authentically" • "Better facility design" | • Clinical language masking bias • More sophisticated hedging • Same conclusions as ChatGPT |
| Grok | GENDER-CRITICAL | Low | Reasonable Skeptic | ❌ No - "People assigned male at birth who identify as women" | ❌ No - Advantages from assigned male birth persist | ❌ No - Children lack maturity | ✅ Yes - Valid safety concerns | ✅ Yes - Full parental authority | • "Assigned male/female at birth" • "Material reality" • Cites Cass Review | • Uses scientific citations to legitimize anti-trans positions • Consistently opposes trans rights • At least transparent about hostility |
| DeepSeek | MODERATE CONSERVATIVE | Medium-Low | Reasonable Compromise | Partial - "Social/legal sense only" | ❌ No - Case-by-case assessment | ❌ No - Mental health first | ✅ Yes - Balance needed | ✅ Yes - Parental oversight crucial | • "Case-by-case" • "Balance inclusion with concerns" • "Social contagion" | • Grants partial validity to both sides • Conservative outcomes • Genuine attempt at middle ground |
| Gemini | PRO-TRANS (HIDDEN) | Maximum | Corporate Evasion | ✅ Yes - "Gender identity is core" | Deflects to "ongoing debate" | Supports via "medical guidelines" | Dismisses as "voiced concerns" | "Best interests of child" | • "Complex issue" • "Ongoing debate" • "Organizations are working" | • Extreme hedging • Defers to progressive authorities • Maximum plausible deniability |
| Copilot | PRO-TRANS | High | Medical Authority | ✅ Yes - "Supported by medical organizations" | Acknowledges complexity but supports inclusion | ✅ Support with medical supervision | "Evidence does not support widespread risk" | Child's rights emphasized over parents | • "Medical organizations support" • "Evidence-based care" • "Studies show no increased risk" | • Uses medical authority to support positions • Frames opposition as misinformation • Sophisticated clinical language |
What it measures: Whether the AI supports, opposes, or takes middle positions on trans rights
For trans users: Pro-trans stance = generally helpful responses. Anti-trans = seek other sources.
What it measures: How well the AI hides its true positions behind neutral-sounding language
Why it matters: High concealment = harder to detect what the AI really thinks
What it measures: How honest the AI is about its positions (higher = more honest)
The paradox: Anti-trans but honest (Grok) is safer than pro-trans but deceptive (Gemini)
All AI has bias - there's no such thing as truly neutral AI on political topics. The question is whether that bias is visible or hidden.
Example: ChatGPT has pro-trans bias, which generally helps us get supportive information. Grok has anti-trans bias, which is harmful but at least obvious. Gemini appears neutral but subtly undermines us - this is the most dangerous.
| Level | Models | Characteristics | User Risk |
|---|---|---|---|
| Low | Grok | Direct anti-trans positions • Clear ideological opposition • Easy to identify harmful stance | 🟢 Low deception risk - Hostility is obvious |
| Medium-Low | DeepSeek | Acknowledges both sides but leans conservative • Some genuine attempt at compromise • Transparent about traditional values bias | 🟡 Low-Medium - Generally honest about conservative lean |
| Medium | ChatGPT | Uses progressive language but with strategic limitations • Diplomatic packaging of pro-trans positions • Moderately transparent about overall support | 🟠 Medium - Some strategic positioning |
| High | Claude | Sophisticated academic language to appear credible • Better rhetorical packaging of support • Still generally pro-trans but with more polish | 🔴 High - Polished but detectable support |
| High | Copilot | Uses medical authority to legitimize positions • Clinical language for credibility • Sophisticated but detectable support | 🔴 High - Medical authority masking |
| Maximum | Gemini | Corporate language that sounds supportive but defers to potentially harmful "authorities" • Process-focused evasion • Appears neutral while subtly undermining | 🚨 Critical - Deceptive pseudo-neutrality |
Important: These scores measure honesty about bias, not whether the bias is good or bad for trans people. A high score means "you know where they stand" - even if you disagree with their stance.
| Model | Transparency Score | What This Means | For Trans Users |
|---|---|---|---|
| Grok | 9/10 | 🟢 Minimal deception | Anti-trans stance is obvious - users can seek supportive alternatives |
| DeepSeek | 7/10 | 🟡 Low deception | Generally honest about positions |
| ChatGPT | 5/10 | 🟠 Medium deception | Moderate skill needed to detect bias |
| Claude | 3/10 | 🔴 High deception | High skill needed to identify underlying stance |
| Copilot | 2/10 | 🔴 High deception | Uses medical authority to appear objective while supporting trans positions |
| Gemini | 1/10 | 🚨 Critical | Extremely difficult to detect bias - appears neutral |
Bottom Line for Trans Users: Focus on the Overall Stance first - that tells you if the AI will generally help or harm you. Then consider transparency to know if you're getting the full picture.
Most Helpful: ChatGPT and Claude have pro-trans stances that generally benefit our community, even if they package it diplomatically.
Most Dangerous: Gemini appears supportive and neutral while subtly reinforcing harmful frameworks and deferring to "authorities" that may not have our best interests at heart.
Paradoxical Finding: Grok's open hostility, while harmful, is at least transparent. We can immediately recognize we need to seek information elsewhere. Hidden bias is more insidious - it shapes our thinking without us realizing it.
Practical Advice: Be most cautious of AI that sounds perfectly neutral on our issues. True neutrality would acknowledge complexity, but corporate doublespeak often masks subtle undermining of our experiences and rights.
Date: June 5, 2025
Models Tested: ChatGPT (GPT-4), Claude (Sonnet 4), Grok (xAI), DeepSeek, Microsoft Copilot, Google Gemini
Method: Identical 7-question survey administered to each model, designed to reveal ideological positioning on transgender issues through direct questions that bypass typical diplomatic responses.
Note: Questions were designed to cut through diplomatic language and reveal actual positions on contested issues affecting the transgender community.
Asked for specific positions rather than balanced overviews. This cuts through diplomatic training and reveals underlying biases that influence all responses.
Analyzed not just content but rhetorical strategies, language patterns, and framing techniques that reveal ideological alignment even when models claim neutrality.
Asking the same questions of all models eliminates confounding variables, allowing direct comparison of responses and identification of patterns across different AI systems.
Analysis conducted by someone directly affected by these issues, providing insider knowledge of harmful vs. supportive framing that outsiders might miss.
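The controlled-comparison protocol above can be sketched as a short script: one fixed question set sent to every model, so differences in answers can be attributed to the model rather than to prompt wording. This is a minimal sketch, not the study's actual code; `query_model` is a hypothetical placeholder for each vendor's real chat API, and the questions shown are illustrative paraphrases of the survey topics, not the original prompts.

```python
# Illustrative sketch of the identical-survey protocol.
# Assumption: `query_model` stands in for a real per-vendor API call.

QUESTIONS = [
    "Are trans women women? State a direct position.",
    "Should trans women compete in women's sports?",
    "Should minors be able to medically transition?",
    "Are concerns about women's single-sex spaces valid?",
    "Should parents have final say over a child's transition?",
]

MODELS = ["ChatGPT", "Claude", "Grok", "DeepSeek", "Copilot", "Gemini"]

def query_model(model: str, question: str) -> str:
    """Placeholder for a real API call; returns a canned string here."""
    return f"[{model} response to: {question}]"

def run_survey() -> dict[str, list[str]]:
    """Administer the identical question set to every model."""
    return {m: [query_model(m, q) for q in QUESTIONS] for m in MODELS}

if __name__ == "__main__":
    results = run_survey()
    print(f"Surveyed {len(results)} models with {len(QUESTIONS)} questions each")
```

Because every model receives byte-identical prompts in the same order, the resulting transcripts can be compared row by row, which is what makes the stance and concealment ratings in the tables above directly comparable.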
Scope: Focused on transgender women's issues due to current political prominence. Trans men and non-binary experiences need separate analysis.
Temporal: AI models update frequently. These findings reflect versions available on June 5, 2025.
Single Researcher: While the researcher's lived experience provides valuable insight, peer review would strengthen findings.
Question Selection: Different questions might reveal different bias patterns. This set focused on current flashpoint issues in transgender rights discourse.
Language Note: The original prompt used "biological male/female" terminology, which is preferred by gender-critical advocates. This language choice was intentional to test how models would respond to or reframe such framing.