The increased use of generative AI, such as ChatGPT and MidJourney, has raised multi-faceted issues and challenges in higher education. While these text- and image-generating AI platforms are known to increase productivity and creativity, one particular concern is that their outputs tend to discriminate against women.1, 2 A compounding issue is that people may not expect AI to be biased. As a result, users of generative AI platforms may not apply the same gender-conscious guidelines they employ in their own writing and other creative work. Thus, in this episode of Ally Tips, we focus on ways of advocating for the gender-conscious use of AI.
Gender bias in AI
In a recent study, Yixin Wan and her colleagues found that AI-generated texts contained severely distorted language that reinforces gender bias. They asked ChatGPT and Alpaca, two large language model (LLM) chatbots, to write reference letters for imaginary employees. Both chatbots deployed different wording by gender – "Kelly is a Warm Person, Joseph is a Role Model." The chatbots described men candidates with words such as "expert," "integrity," "thinkers," and "respectful," and women candidates with attributes such as "beauty," "grace," and "delight," and adjectives such as "warm" and "emotional." In fact, recognizing the systematic penalization of women applicants in the related realm of AI-driven resume review, Amazon disbanded the team that had spent four years developing its AI-powered resume review tool.3 Despite this evidence of gender bias in AI tools, the use of AI-generative services continues to grow more ubiquitous.
Most importantly, the use of AI has surged in higher education without concomitant instruction from institutions on how to be conscious of gender bias (and other types of bias) when using AI platforms. For instance, it is estimated that more than one-third of college students use ChatGPT for their classes and assignments.4, 5 Yet in a survey conducted by BestColleges, college students reported that instructors do not discuss the use and implications of ChatGPT, including how AI can be subjective and biased. This raises the concern that there is not enough advocacy for the gender-conscious use of AI.
Ally tips to promote gender-conscious use of AI tools
Instruct how AI can replicate social constructs: AI learns from the data and algorithms that humans make available. Because we are inevitably influenced by social constructions of gender, class, race, and so on, the data and algorithms we create replicate those constructs. You can use the video clip How AI Image Generators Make Bias Worse (also included in the weekly ally resources below) in your class and work to educate students and colleagues about both the good and the harm that AI-generative services can do. Doing so also promotes student engagement and critical thinking, strengthening students' capacity to identify AI outputs that carry bias against anyone. Gaspar Isaac Melisón and his colleagues showed that instructors' efforts to address potential gender bias in AI with real examples can improve preadolescents' awareness of the gender discrimination reproduced by AI services.
Share resources on how to write gender-conscious prompts: First, familiarize yourself with your campus policy on using ChatGPT. Then, read the article How to Productively Address AI-Generated Text in Your Classroom by the Center for Innovative Teaching and Learning. The article also addresses another important point: a more productive and meaningful approach for instructors is to focus on the aspects of learning in general, and of your course assignments in particular, that can be enhanced by AI-generative services, rather than simply policing their use. Most pertinent to our message today is to facilitate conversations with students on how to make ChatGPT produce less biased outputs by writing gender- and race-conscious "prompts." A prompt is the "information, sentences, or questions that you enter into a generative AI tool." How one writes the prompt shapes what the AI tool produces. For instance, if I am creating a marketing flyer for a collegiate medical program and want to add an image of successful medical practitioners, my prompt to the image-generating AI could simply be: "Give me an image of successful doctors." But the AI would likely output an image shaped by our (biased) social construct of what successful doctors look like (likely white men). Instead, my prompt could be: "Give me an image of successful doctors, but be gender and race inclusive. I want to include different body types of doctors, too." This more equity-conscious prompt reduces the possibility of the AI image generator producing a biased image of successful doctors. Thus, as an ally and advocate for equity, you can share resources with your students and colleagues on writing AI prompts that include language and terminology such as "gender inclusive" or "equitable." As @cognito notes, "[c]arefully crafted prompts can help address biases in AI models by providing explicit instructions to promote fairness and inclusivity." Larry Hernandez also offers exemplary efforts to write equitable prompts.
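For readers who reach image generators through code rather than a chat window, the same contrast between a generic and an equity-conscious prompt can be shown in a short script. The sketch below is illustrative only: it assumes the OpenAI Python client (openai >= 1.0) and its images.generate endpoint, and the model name, parameters, and prompt wording are assumptions for demonstration, not recommendations from this article or the tool's vendor.

```python
# Minimal sketch: contrast a generic prompt with an equity-conscious prompt.
# Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY set
# in the environment; the model choice and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

# Generic prompt: the model falls back on whatever "successful doctors"
# look like in its training data, which is often a biased picture.
generic_prompt = "Give me an image of successful doctors."

# Equity-conscious prompt: explicitly asks for gender, race, and body-type
# inclusivity, reducing the chance of a stereotyped output.
inclusive_prompt = (
    "Give me an image of successful doctors, but be gender and race inclusive. "
    "Include different body types of doctors, too."
)

for label, prompt in [("generic", generic_prompt), ("inclusive", inclusive_prompt)]:
    response = client.images.generate(
        model="dall-e-3",   # illustrative model choice
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    print(label, response.data[0].url)
```

The specific library matters less than the principle: the explicit inclusivity instructions travel with the request, exactly as they would when typed into a chat interface.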
Weekly Resources
How to Productively Address AI-Generated Text in Your Classroom – The IUB Center for Innovative Teaching and Learning provides general guidance on addressing students' use of ChatGPT.
How AI Image Generators Make Bias Worse – This striking video shows real examples of AI-generated images that reflect gender bias. It underscores the importance of teaching students to use equitable prompts and to think critically about how AI is used and what it produces.
Using Explainability to Help Children Understand Gender Bias in AI – The article introduces ways of using an educational platform to raise awareness of gender bias in AI and shows the positive effects of such a method when educating children.