
Is your AI an LGBTQ+ ally? 

Tips for preventing algorithmic LGBTQ+ bias in your brand’s artificial intelligence

Miriam Vogel

June 30, 2022 · 4 minutes

EqualAI and being an LGBTQ+ ally

Miriam Vogel is the President and CEO of EqualAI, an organization co-founded by LivePerson CEO Rob LoCascio to lead the movement for innovative, responsible, and inclusive artificial intelligence.


Our enthusiasm for the potential of artificial intelligence (AI) technologies, as well as our fears about their misuse, plays a particularly important role in how we support the LGBTQ+ community. This Pride month, as LGBTQ+ rights have come under attack in legislatures, courts, and streets across the country, it is a critical time to consider how AI applications can have unique, unintended consequences for LGBTQ+ individuals and communities, and how we can remedy those harms.

Let’s start with the basic components of AI and machine learning to understand the challenges in its application to LGBTQ+ individuals:

  • How does “other” fit into the binary code of 0 and 1?
  • How do we ensure fairness to a human being who identifies as LGBTQ+ when datasets and algorithms either do not or cannot identify LGBTQ+ individuals?

These challenges make it harder both to ensure equal access to the benefits of AI and to protect LGBTQ+ individuals and communities from algorithmic bias. Our celebration of Pride month is a helpful reminder to ensure that our discussions of algorithmic challenges and AI research opportunities include considerations of particular consequence to LGBTQ+ communities.

Here we present three ways you can be an LGBTQ+ ally in this effort:

1. Use caution with AI programs claiming to identify gender or sexuality

Some AI applications are fundamentally discriminatory with respect to gender identity and sexual orientation. In addition to the well-known harms in hiring systems, Automated Gender Recognition (AGR) uses facial recognition to infer a person’s gender. AGR technology carries a disproportionate risk for transgender and non-binary people, given its ability to “out” or mischaracterize them without their knowledge or input.

Another contested use of AI erupted into debate when a Stanford University study argued that AI can accurately guess an individual’s sexual orientation from facial features. Not only is the application built on broad and potentially inaccurate assumptions, it is also hard to imagine a legitimate use for such a program. Questionable uses of AI like these illustrate why any analysis must include the threshold question of whether an AI program should be employed in the first place.


2. Examine underlying causes of algorithmic LGBTQ+ bias

At EqualAI, we believe that bias can be embedded at each human touchpoint throughout the AI lifecycle. But each touchpoint is also an opportunity to identify and remove that form of bias.

In addition to the framing, design, and development of the AI system, a significant source of bias against LGBTQ+ individuals lies in the historical human biases baked into the datasets used to train the algorithms. Furthermore, because many datasets lack LGBTQ+ labels, whether out of respect for privacy, for legal compliance, or through simple oversight, it can be nearly impossible to study whether an application is biased against an individual or community based on their LGBTQ+ status.

Bias can also emerge through incomplete or unrepresentative training data: a model’s predictions can be systematically worse for under-represented groups, another source of AI discrimination that disproportionately impacts LGBTQ+ individuals, as the sketch below illustrates.
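
To make this concrete, here is a minimal sketch of a disaggregated evaluation in Python, using scikit-learn and purely synthetic data. Every feature, label, and group in it is an illustrative assumption rather than real demographic data; the point is only to show how an overall accuracy score can look healthy while a model performs systematically worse for an under-represented group:

    # Minimal sketch: train a simple classifier on synthetic, deliberately
    # imbalanced data, then report accuracy separately for each group.
    # All data here is synthetic and illustrative, not real demographics.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Simulate an under-represented group (group 1) whose examples follow
    # a slightly different distribution than the majority (group 0).
    n_majority, n_minority = 5000, 250
    X = np.vstack([
        rng.normal(0.0, 1.0, size=(n_majority, 5)),
        rng.normal(0.5, 1.0, size=(n_minority, 5)),
    ])
    group = np.array([0] * n_majority + [1] * n_minority)

    # Labels depend on the features plus a group-specific shift the model
    # cannot observe, so a model fit mostly to the majority generalizes
    # worse to the minority group.
    logits = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + 0.8 * group
    y = (logits + rng.normal(0.0, 0.5, size=len(logits)) > 0).astype(int)

    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=0
    )
    model = LogisticRegression().fit(X_tr, y_tr)

    # The aggregate number can hide a large gap between groups.
    print(f"overall accuracy: {accuracy_score(y_te, model.predict(X_te)):.3f}")
    for g in (0, 1):
        mask = g_te == g
        acc = accuracy_score(y_te[mask], model.predict(X_te[mask]))
        print(f"group {g} accuracy: {acc:.3f} (n={mask.sum()})")

Of course, this kind of disaggregated check is only possible when group labels are available at evaluation time, which, as noted above, is often not the case for LGBTQ+ status.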


3. Align with LGBTQ+ ally and advocacy groups

For all of these reasons, when building or implementing AI technologies, it is crucial to seek out multidisciplinary stakeholders who can provide the perspectives of potentially under-represented communities, such as the LGBTQ+ community. A number of organizations work at the intersection of AI and LGBTQ+ advocacy, including:

  • The Algorithmic Justice League, which offers workshops and development standards for inclusive AI 
  • The American Civil Liberties Union (ACLU), which tackles the pervasive challenges in facial recognition technology and other AI-supported mechanisms that have biased inputs or applications resulting in discriminatory outcomes
  • The Human Rights Campaign, which involves developers in AI planning, building, and testing to ensure that identifiable instances of harmful bias are eliminated from AI applications

We have work to do, but the opportunity is vast

AI is a reflection of our society and our language. As such, we should expect historical biases to emerge in its development and application, and we should have a plan to identify and mitigate those risks. But AI also presents an opportunity: if we are intentional and make this a priority in how we design, build, and deploy our AI systems, we can move beyond the harms and inequities of our past, including our treatment of LGBTQ+ people.


Check out Miriam Vogel on The Changemaker podcast with will.i.am!