Image: AI Pride computer (AI-generated)

Living Outside the Binary: How Queer Communities Can Leverage AI

June 4, 2024

In 2017, a Stanford study claimed that artificial intelligence (AI) could determine sexual orientation solely from facial images, a claim swiftly criticized by LGBTQ+ advocacy groups as part of a larger conversation around data privacy and the ethical use of AI. While an idealistic goal of AI may be to eradicate human biases and uplift marginalized communities, the data used to train AI systems is often biased itself. Still, if understood and used correctly, AI tools could be a boon to LGBTQ+ health, education and advocacy, serving as a counterweight to bias and a force for equity in the queer community.

Before we discuss the ways AI can be used to uplift the community rather than reinforce biases, we need to understand how far-reaching those biases can be. Recently, a social media platform came under fire for alleged “anti-LGBTQ+” practices in its algorithms, including limiting the reach of LGBTQ+ hashtags and LGBTQ+ creators’ content while promoting homophobic content to its users, the majority of whom are between 13 and 20 years old and especially vulnerable to suggestive or hateful content. Not every AI tool is a recommendation algorithm, but many learn from user interactions and collected data, adapting to new information to shape their responses and outcomes.

A large part of the bias in AI systems also comes from training on historical data. AI developers teach a system how to respond by establishing “norms” or “staple decisions,” and that approach, by design, pushes outcomes without robust examples to the margins. When communities are pushed to the margins, they are treated as “other” or “incorrect” relative to the set standards, in this case standards derived from historical data. Structural biases persist in the U.S. health care and education systems, for example, and AI takes on those qualities through the data it digests.

During the AIDS crisis of the 1980s, health care outcomes for queer people were devastating because there was little data about the disease and the government was slow to act on researching it. An AI tool trained on data from that era might surface outdated treatments that contradict current medical research. The broader issue is that biased historical data of this kind may be used to train modern AI systems, continuing to reinforce pre-existing biases in health care, education and other fields.

AI’s depth and breadth of knowledge is determined by the data fed into the system, and the LGBTQ+ perspective has long been excluded from algorithmic fairness research. This is partly because queer identities are treated as “sensitive information,” and partly because queerness is seen as a fluid cultural construct that cannot be captured in the 0-and-1 binary AI operates on. These are flimsy arguments at best, and they sit at the crux of the issue: LGBTQ+ perspectives are not being recorded when AI is trained or when its effects are studied.

Reclaiming AI Tools for Good

The Trevor Project, an LGBTQ+ suicide prevention nonprofit in the United States, offers a strong example of an organization using AI to supplement existing systems. It developed a “Crisis Contact Simulator” to help train its counselors to respond to different situations and support youth in crisis, showing how intentional, LGBTQ+-focused technology can help the community and even save lives.

Another example is AI’s ability to help us anticipate future legislation and policy. Predictive analytics can help lobbying and advocacy groups anticipate and prepare for policy changes that may affect the LGBTQ+ community, keeping them a step ahead in the advocacy process. This is done by training models on legislative and regulatory data to predict when relevant legislation might be introduced at the local, state or national level, and which legislators may introduce it given their voting and sponsorship history.
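As a rough illustration of the idea, the sketch below trains a simple text classifier on past bill summaries to flag newly introduced bills that may be relevant to LGBTQ+ policy. The bill texts, labels and model choice are hypothetical placeholders for this example, not drawn from any real advocacy tool.

```python
# Hypothetical sketch: flag incoming bills that may be relevant to LGBTQ+ policy
# by learning from past bill summaries. All data below is invented placeholder text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training set: summaries of past bills, labeled 1 if an advocacy
# group judged them relevant to LGBTQ+ policy, else 0.
bill_summaries = [
    "Prohibits discussion of gender identity in public school curricula",
    "Allocates funding for rural broadband infrastructure",
    "Restricts access to gender-affirming care for minors",
    "Updates licensing requirements for commercial fishing vessels",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(bill_summaries, labels)

# Score a newly introduced bill; a high probability would prompt human review.
new_bill = "Requires school libraries to remove materials on sexual orientation"
relevance = model.predict_proba([new_bill])[0][1]
print(f"Estimated relevance to LGBTQ+ policy: {relevance:.2f}")
```

A production system would draw on much richer signals, such as sponsor histories, committee assignments and session calendars, and anything the model flags would still go to human advocates for review.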

In April, researchers at the MIT Media Lab released a study called “AI Comes Out of the Closet,” describing a large language model (LLM)-based simulator that “allows users to experiment with and refine their approach to LGBTQ+ advocacy in a safe and controlled environment.” Such a simulator could be used for workplace training that fosters more inclusive environments, for deepening understanding and empathy in health care and mental health services, and for helping educators teach LGBTQ+ issues and empathy.

As the Trevor Project’s counselor training and MIT’s LGBTQ+-centric simulator show, AI is not just about technology; it’s about people.

There are countless issues surrounding AI for the LGBTQ+ community, and many of them exist because the people who create these tools do not consider their impact on marginalized groups. If we accept that AI technologies are here to stay and will keep evolving, we need to stay ahead of the curve in understanding and addressing their flaws, working to reduce and eradicate bias against the LGBTQ+ community and, in turn, using these tools to improve community building and support for queer people. If we don’t, more of our queer brothers and sisters will be pushed to the periphery, and our voices will never break through the AI din.

The image used for the post was AI-generated. 
