
Gender bias in data snuck up on me.

I'm a lucky and privileged outlier to have faced very little gender discrimination in my data career. So when I encounter instances of gender bias, it surprises me, even though I know better than to be surprised.


I recently had a particularly frustrating example.


One of the skills I am working on is explaining technical concepts to non-technical people. I have so many deep and interconnected thoughts swirling in my head that it's challenging to share technical concepts in simple, efficient, easy-to-understand ways. Anticipating that my new role will require even more of this, about even more complex concepts, I decided to set up a Custom GPT.


After feeding it the basics and naming it "Data Science Mentor" (GPT's suggested name, which is relevant to the story), I came to the step where the image for the Custom GPT is generated.


Does anyone want to place bets on what the image looked like?

.

.

.

I'll go double or nothing on a white man!

.

.


Ding! Ding! Ding! CORRECT! The image featured a white man against a very overwhelming-looking dashboard of numbers and data viz.


I liked the avatar, so initially my only request was to make it female. Women can be data science mentors, too! I can name names! Truly, I was unbothered that the initial avatar was a man and just wanted the exact same avatar as a woman -- something that seemed like a quick and easy change to me.


But when I asked that, the AI changed a lot about the image.


  • The woman was dressed in a suit and tie. I can maaaybe give it this one since I asked for a gender change, not an outfit change. And we love a pantsuit moment!

  • The woman's hair was pulled back into a low bun. I immediately thought of the larger societal conversations around what hair types and hairstyles are considered "professional" in various settings. The fact that the most slicked-back, simple, non-descript hairstyle was chosen seemed problematic.

  • While the man was facing straight forward making eye contact, the woman was in profile, looking off to the side. This shift from a direct to an indirect gaze was an uncanny representation of who gets to be an "assertive and direct leader" in the workplace vs. who has to manage others' perceptions and play a ton of politics.

The first "female" iteration of the avatar.

Lots of yikes.


Again, I should know better by now. Had I known this would still be irritating me days later -- enough to write a blog post about it -- I would have screenshotted the original image of the man. But something tells me you don't need that vivid an imagination to envision it.


I love an Avril-inspired punk rock necktie look, but that isn't my personal style aesthetic. I asked FOUR TIMES for the "tie," the "necktie," and even the "scarf" to be removed from the image, and the AI never obliged, even insisting there was no tie in the image. Eventually, I outsmarted it by asking for a necklace. This made me want to push gender norms from the opposite angle: how many attempts would it take to generate a man wearing earrings and a necklace?


Over several attempts to remove the necktie, the woman started to look more and more like the image I use for my ChatGPT profile. Kind of a nice touch -- I was creating the Custom GPT for myself, after all -- but still problematic to assume.


My OpenAI photo by Fagan Studios.

It's almost like the AI was getting annoyed at my repeated demands and hoping I'd acquiesce if it generated an image that mirrored me. Affinity bias is the psychological phenomenon where we react more favorably to those who are similar to us in outward or inward characteristics. While a perfectly natural human behavior, affinity bias has proven detrimental to diversity and is something we must make a conscious effort to overcome. I can't prove it, but I've had a marked change in hairstyle since I uploaded my OpenAI profile photo... correlation is not causation... but still... the hair is even swept over the same shoulder and parted on the same side... you be the judge:


The colors, while initially a nice turquoise blue, became more and more drab with each iteration, ending at a gross, muted orange-brown tone that is probably My Least Favorite Color in the World. Wanting something that would stand out among my other Custom GPT icons, and now going for maximum femininity, I asked for bright purple and hot pink.


It was at this stage that the dashboard background full of realistic vizzes and data crossed over into vague squiggles and polka dots. More subtle dumbing down of women's technical abilities, grrr!

I. Gave. Up.


It's a small round avatar, and I can have another go at editing it anytime or upload my own image. This had gone on too long and switched from fun into frustration. I liked the final avatar well enough, and at this point I had fought hard to create her! I decided to save my time and rage for conquering the real-world battles women in data face daily.

The avatar I ended up with.

If you're reading this and get it, you can stop reading here.

If you're reading this and confused why this matters, please read on.


I can assure you I was not "looking to be offended." I was just trying to set up a Custom GPT when one subtle change -- to make the avatar a woman -- set off a snowballing chain reaction of problematic tweaks that I had to get more and more specific and determined in my prompting to overcome. Until I gave up.


Imagine if this was something more important than a silly avatar.

  • Imagine this was a woman vying for a promotion. Her direct, straightforward, efficient communication style is perceived as "aggressive," leading to her being denied the opportunity for vague reasons such as "lack of executive presence."

  • Imagine this was a young girl who loves sparkles and princesses and data, but thinks she has to fit a sad, beige, pocket-protector-wearing stereotype out of Central Casting to be taken seriously and get ahead.

  • Imagine this was a highly educated woman who spent two years trying to pivot into data. She changed nothing but her resume colors -- from plum to royal blue -- and immediately started getting more job traction. (I love a good A/B test. I now consider purple my power career color and refuse to change it.)

Aaand this is just one minuscule example of gender bias alone -- the underrepresented-in-data group I belong to, and therefore notice more and can personally speak about. Harassment, discrimination, and bias around race, color, disability, religion, age, and more are real things good data people are up against every day.


If you think this blog post is ~dRamAtiC~, I am glad you've never been in a situation where you've been on the losing end of these harmful biases, because it's frustrating, tiring, maddening... all of the emotions. I'm genuinely happy for every human who doesn't have to go through that. But that doesn't make it any less real for those who do. It's a constant battle, which is why seemingly small and silly things -- like spending 20 minutes prompting to create a more inclusive AI-generated avatar -- become Big Things and start to add up.


But I can't fault the AI. My favorite line from my recent read Hello World is "our outcome is biased because reality is biased." That sentence has been circulating in my head since I read it. We are in a weird chicken-and-egg situation: we need algorithms to overcome systemic human bias, but we also need to overcome systemic bias as humans for that to be reflected in our algorithms. Both must happen simultaneously, in lockstep, to drive any real change. I enjoyed the book and appreciated how the author, Hannah Fry, adds a non-finger-pointy explanation to that conversation.

So when things like this happen, I will be talking (or writing, or shouting) about it. This is something we need to keep drawing attention to, being mindful of, and having conversations about as we interact with and rely on AI more and more. Truly, every bit helps.

