Research suggests that a smaller share of women than men use genAI. A new working paper examines 16 relevant studies covering more than 100,000 individuals across many different countries and finds that women are less likely than men to use genAI in 15 of them. (The 16th found that women tech workers in a handful of countries are slightly more likely than their male counterparts to use genAI at work more than once a week.) Moreover, the paper's authors found evidence suggesting that "equalizing access might help shrink the gender gap, but is unlikely to fully close it."
This disparity could have implications for the pay gap, job opportunities, and the broader economy, given other research showing that genAI tools like ChatGPT and GitHub Copilot can boost the productivity of the people using them.
We spoke with Solène Delecourt, one of the new study’s co-authors and an assistant professor at UC Berkeley’s Haas School of Business, about the possible reasons for the AI adoption gender gap and what business leaders should be doing about it. Here are highlights from that conversation, edited for length and clarity:
What do you think is driving the gap between the share of men and women who use genAI?
I am going to speculate here because this is not something we have shown in the paper. One possibility is that AI, stereotypically, seems to be very male-typed. When you think of generative AI, you're probably more likely to say it is associated with men. So maybe women identify less [with it], feel less interested, or feel like they don't belong. This is not on the women themselves; this is on the constraints that women operate under.
It could also be something about the time available to experiment with the tools. Think about the first time you tried ChatGPT. You probably didn't think it was that great because you didn't know how to write a prompt. A little bit of trial and error is required. Many different papers show that women are more time-constrained because they are also doing a lot of chores; there is inequality in who takes care of household work and the children, and the cognitive labor of the household tends to be carried primarily by women. So maybe women are more time-constrained. Even if they try [using the tools], maybe they don't see the benefit right away, and they don't have the luxury of time to experiment with it. This is something that will have to be tested.
A third possibility is something about guilt: women may feel like it's cheating to use ChatGPT. If I am a lawyer or an accountant or a professor or a doctor, and I have to write a note for my patients or a note to my students, maybe it feels a little bit like cheating to rely on ChatGPT. Maybe this is something that women are more susceptible to than men.
These three explanations are very different. To disentangle them, we would have to run an experiment, and they would [have] very different policy implications.
If I’m an executive at a company, what should I be doing about this?
We assembled 16 different data sets, but that doesn’t mean [the gap is] absolutely universal. The first step would be to be aware of these potential differences and to investigate within your organization [to see] if this is actually a problem.
If there is a difference in usage by gender, investigate whether that correlates with differences in productivity. I would recommend experimenting. Maybe you can do an A/B test to figure out what seems most promising for addressing the gap in that particular organization.
What’s an example of an A/B test a company could run?
For example, if you don't already provide genAI for free, can you try to do that? Then you can try simple tests, like the way that you frame the tool. Can you try to make it more appealing to women in some way? Across different contexts that's going to look different, especially because men and women also tend to work in different subfields, or, within an organization, in different branches or different types of roles.
You raised three possible explanations for the AI gender gap: stereotypes, time constraints, and feelings of guilt. How would you target the first two?
The stereotype one may be addressed with extra messaging, or maybe careful marketing around the issue. The time constraints could be addressed with more awareness of the fact that it's not going to work on the first try. As a manager, I'm going to give you some space and time to try it. That week, part of your assigned tasks would be to try to make it work for yourself, for example. Try to help people carve out time if they don't necessarily have it or if they don't feel allowed to take the time to experiment. Actually, we are encouraging you to try to do that.
Would it be useful to make training in generative AI mandatory, since if the training is optional, some people will have time for it and others won’t?
That could be one way to try to equalize the field, and hopefully as part of the training you also give people time to experiment with it. Another way would be to have groups, almost like working groups or support groups, where people meet and try it for their particular job as opposed to trying it by themselves. Maybe you want to encourage people to go with a friend. If people come not just by themselves but with a coworker, they may be more likely to stick with it and to enjoy the training.
Read also: Charter’s article on how to address the AI adoption gender gap.