
Built For All?: Artificial Intelligence And The LGBTQ+ Community

By Aaron Spitler, Global Voices

With each passing year, artificial intelligence (AI) appears to be increasingly embedded within our daily lives. Around the world, people are becoming more familiar with the ins and outs of this innovation. They are also beginning to see its potential advantages. Recently, a global survey conducted by market research firm Ipsos revealed that 55 percent of respondents felt AI-powered solutions offer more benefits than drawbacks. Results like these make it clear that, despite the anxiety surrounding this technology, the public remains intrigued by what it can do. Companies have taken note of this sentiment, selling their products by emphasizing their efficiency and usability. Given how private investment in AI has soared over the past decade, the evidence suggests that consumers have bought into this pitch.

However, not everyone is convinced. Case in point: members of the lesbian, gay, bisexual, transgender, and queer+ (LGBTQ+) community have paid more attention to the downsides associated with AI. Many problems can be traced back to the data used to train models, which is often rife with stereotypes and misconceptions about LGBTQ+ people. Yet AI’s “offline” impacts can be equally alarming. The technology’s incorporation into systems specifically designed to identify and surveil community members, for instance, is also top of mind. From development to deployment, these issues illustrate how AI-enhanced tools are frequently more harmful than helpful to the LGBTQ+ community. Without proper guardrails in place, many may find that the technology brings more harm than benefit.

Digitizing established stereotypes

To understand how AI can adversely affect LGBTQ+ individuals, it is important to start with the data that is fed to models. “Wired” highlighted that, when prompted to depict members of this community, popular image generation tools produced reductive outputs. As an example, Midjourney routinely presented lesbian women as stern figures with numerous tattoos. Data scraped from the internet may be to blame for these oversimplified (and offensive) representations of queer life. Much of the information available to models about the LGBTQ+ community is influenced by stereotypes. As a result, solutions like Midjourney are exceedingly likely to reproduce these biases in their images. Although workarounds, such as improved data labeling, can boost model accuracy, they may be insufficient because of the sheer amount of derogatory content found online.

Flawed portrayals of the LGBTQ+ community by AI models are not an isolated problem. In fact, many of the AI tools dominating the market generate outputs that are skewed against this group. In a report assessing the guiding assumptions which define large language models (LLMs), the United Nations Educational, Scientific and Cultural Organization (UNESCO) identified how widely used tools like Meta’s Llama 2 and OpenAI’s GPT-2 are markedly shaped by heteronormative attitudes. According to their research, these LLMs created negative content about gay people more than half of the time in their simulations. UNESCO’s findings not only underscore the pervasive homophobia in training data consumed by prominent generative AI solutions; they also show major developers’ inability to effectively address this far-reaching issue.

Enhancing public surveillance

The damage AI may inflict upon LGBTQ+ individuals is not limited to the digital space. AI-powered systems that allegedly detect the gender of those in public spaces have garnered real attention. Forbidden Colours, a Belgian non-profit that defends LGBTQ+ rights, outlined the troubling implications of AI tools for “automatic gender recognition” (AGR). AGR solutions analyze audio-visual content, like security camera footage, to draw conclusions about a person’s gender based on cues such as facial features and vocal patterns. These “cutting-edge” systems are inherently problematic. As the organization states, it is impossible to determine how a person understands their gender by exclusively studying how they look or speak. In this regard, building solutions that classify individuals using these arbitrary characteristics is misguided at best and dangerous at worst.

Despite these glaring deficiencies, AGR systems have their vocal proponents. In particular, governments that are expressly antagonistic to the LGBTQ+ community have adopted these tools, with many rationalizing their decisions in the name of public safety. For example, “Politico Europe” reported how Hungarian Prime Minister Viktor Orbán sanctioned the use of AI-enabled biometric monitoring at local Pride events. The far-right politician claimed that such measures would shield children from the “LGBTQ+ agenda.” In reality, the move enables the government and its allies in law enforcement to surveil artists, activists, and average citizens at these gatherings. Although this policy is being reviewed by institutions within the European Union, its implementation serves as a stark reminder of how AI can be used to intimidate LGBTQ+ leaders who are mobilizing for change.

Changing the equation

For members of the LGBTQ+ community, the trade-offs related to AI are steep. While this innovative technology might be a net positive for the larger population, it presents specific challenges that may disproportionately impact queer users. Common tools, such as image and text generators, have been found to recirculate damaging tropes about LGBTQ+ life that are difficult to completely eliminate. Outside the digital realm, AI’s deployment in offline spaces also poses significant risks. Its incorporation into surveillance systems, oftentimes with the explicit goal of labeling the genders of those caught in the dragnet, stands as an affront to individual privacy. Taken together, these examples demonstrate how many of the AI solutions which have reshaped our day-to-day experiences were not designed with all people in mind.

Leaders across sectors must take action to reverse this trend. It starts with forging partnerships between developers and stakeholders in the LGBTQ+ community. Constructive collaboration can help ensure that the training data used by AI models more accurately reflects the lived realities of queer people. It should also include robust safeguards to prevent the misuse of AI for surveillance against the community. Systems equipped with gender detection capabilities must be strictly prohibited, as they undermine an individual’s right to privacy. Critically, input from LGBTQ+ individuals should be solicited at all stages of the tool development lifecycle. This cooperation would not only mitigate the myriad harms presented by AI, but also increase the likelihood that members of this community will begin to see the technology as an asset rather than a threat.
