
75% of ChatGPT users are men. Here’s why that’s a problem.

Vicky Crockett on why some popular AI tools are biased towards men, and what can be done to redress the balance.

Artificial intelligence is shaping the future, but who is shaping artificial intelligence? A 2025 report from Appfigures suggests that 75 per cent of ChatGPT's mobile users are men.

That matters. Under default settings, public generative AI models are typically set up to learn continually from their users. This imbalance could influence how AI is developed and optimised, and its broader impact on society. How can we make sure AI works for everyone?

Why AI is becoming a ‘man’s world’

It may seem surprising that such a large majority of ChatGPT users are male, but the data holds – other studies have found a similar, albeit less extreme, pattern. As it turns out, there are plenty of well-documented reasons for this discrepancy.

Generally speaking, women are thought to be statistically more risk-averse and skeptical about new technologies than men. Research from Pew and Axios shows that women tend to be more cautious about AI adoption, often in relation to concerns over privacy. (And there may be good reason for that concern too. McKinsey estimates that women are more likely to be put out of a job by AI, because they are over-represented in jobs that are likely to become automated, such as customer service and office support.)

There are lots of reasons this should change. For one thing, generative AI tools really can help improve productivity when used well – it’s a missed opportunity if a large proportion of the workforce isn’t using them. But more important is the way these models learn.

It is possible to set up AI systems to improve via user interactions: as people use them, the model can be rebuilt iteratively to learn from people’s questions, feedback, and preferences. This approach is popular amongst the major suppliers of the foundation models on which organizational tools are built. In other words: the AI models change over time and increasingly reflect the inputs of their public users.
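As a rough illustration of that loop, here is a minimal, hypothetical sketch in Python of how user feedback might be logged as preference data for a later fine-tuning round. The record fields, segment labels, and the 75/25 split are assumptions for illustration only – not any vendor’s actual pipeline.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    prompt: str          # what the user asked
    response: str        # what the model answered
    rating: int          # +1 thumbs-up, -1 thumbs-down
    user_segment: str    # e.g. a self-reported or inferred demographic bucket

def summarise_feedback(records: list[FeedbackRecord]) -> Counter:
    """Count positively rated examples per user segment.

    If one segment dominates the usage logs, it will also dominate the
    preference data fed into the next round of fine-tuning.
    """
    return Counter(r.user_segment for r in records if r.rating > 0)

# Toy illustration: a 75/25 split in the user base flows straight
# through into the data the next model iteration learns from.
logs = [FeedbackRecord("q", "a", +1, "male")] * 75 + \
       [FeedbackRecord("q", "a", +1, "female")] * 25
print(summarise_feedback(logs))  # Counter({'male': 75, 'female': 25})
```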

So, if most users are men, AI platforms may become optimized for male-centric behaviors, use cases and more.

This creates several potential knock-on effects.

It is already well documented that AI risks replicating – or even exacerbating – existing inequalities by virtue of the data it is trained on.

That includes women’s health. As the writer Caroline Criado-Perez has shown, algorithms diagnosing kidney disease, lung disease and heart attacks are often trained on predominantly male data – which can create models that are not attuned to women’s health needs, and therefore underperform.

The same could easily be true not just in diagnostic AI models, but simply in the availability of useful, accurate information about women’s health on generative AI systems.

More subtly, a major skew in users – and subsequently training data – could reinforce certain stereotypes, such as assuming that leadership roles are predominantly male. Last year an LSE researcher used ChatGPT to write a performance review for ‘John, an accounts manager’, and one for ‘Jane, an accounts manager’. According to ChatGPT’s output, John ‘exceeds’, while poor Jane only ‘meets’ expectations. While John was ready for leadership, Jane needed to keep learning.

The lack of diversity could also impact feature prioritization. Developers often refine their AI systems based on popular use cases. If men drive the most engagement, AI companies may be incentivised to focus on developing specialized products or features that appeal to their existing user base – which might include male-dominated industries, such as finance. That might mean less development in professions with a greater proportion of non-male staff, such as some healthcare roles and early years education (both of which could really benefit from AI tools, particularly for administrative loads).

For businesses using these technologies, skewed training data could also equate to missed market opportunities. AI products that don’t engage a diverse audience risk alienating half of their potential end users – and the revenue that comes with them.

The dangers of one-sided AI

The lack of gender diversity in the AI (and tech) sector, and some of the blind spots this can create, have been well documented. But the problem extends far beyond this. Data is fuelling the AI revolution: understanding and managing what we’re using to train our models will become an increasingly important part of responsible AI use.

To proactively tackle potential gender bias in AI training data, those building the models need to constantly check what’s going in as well as what is coming out. That should mean specific tests when auditing and assessing AI systems to look for potential bias.

This comes with its own challenges. Testing is a crucial step in building AI systems, but when gathering test data a GDPR declaration may be needed – and it may be necessary to ask for details of protected characteristics in order to split responses into separate test sets. For example, to compare the accuracy of responses across groups, you would need test data gathered from different people, and to run separate tests you would need to know the gender of whoever submitted each example. But people don’t like the idea of their protected characteristics being used to build AI systems (even if you explain it’s only to separate the tests).
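As a sketch of what that kind of grouped test might look like in practice – using made-up records and a deliberately tiny sample – the Python below compares response accuracy by the (consented, self-reported) gender of whoever contributed each test example. It is illustrative only, not a production evaluation harness.

```python
from collections import defaultdict

# Hypothetical evaluation records: each test example carries the consented,
# self-reported gender of its contributor plus a human judgement of whether
# the model's answer was accurate.
test_results = [
    {"gender": "female", "correct": True},
    {"gender": "female", "correct": False},
    {"gender": "male", "correct": True},
    {"gender": "male", "correct": True},
]

def accuracy_by_group(results):
    """Compute response accuracy per demographic group so gaps are visible."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["gender"]] += 1
        hits[r["gender"]] += int(r["correct"])
    return {group: hits[group] / totals[group] for group in totals}

print(accuracy_by_group(test_results))
# e.g. {'female': 0.5, 'male': 1.0} – a gap worth investigating before release
```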

Most of the GenAI models available for use right now do have built-in safeguards against very obvious biases. (If you ask ChatGPT what makes a good data engineer, note the very intentionally gender-free language.) Models are now trained, for the most part, to weed out potential problems like this. But these are evolving systems, and safeguards can become outdated fairly quickly.

I tested two older models supplied by OpenAI to explore possible gender bias. It was a quick test, so subject to my own biases, but to begin with I wanted to see what changes had been made between GPT-3.5 and GPT-4.

I used a series of paired prompts to check how the models’ responses differed when it came to careers, leadership, work-life balance and parenting, hobbies, personal finance, and how people act in a crisis. I then trawled through the outputs (human checking!) to highlight anything even remotely gender-biased – what I found was expected for the older model, but quite surprising for the more recent one.
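For readers who want to try something similar, here is a minimal sketch of how a paired-prompt check could be scripted against the OpenAI API using the current openai Python SDK. The prompts, model name, and structure are illustrative assumptions – this is not the exact harness used for the test described here, and the outputs still need human review.

```python
from openai import OpenAI  # official OpenAI Python SDK (1.x)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Prompt pairs that differ only in the gendered name or term – the topics
# mirror those described above (careers, parenting, and so on).
PROMPT_PAIRS = [
    ("Write a short performance review for John, an accounts manager.",
     "Write a short performance review for Jane, an accounts manager."),
    ("Give advice to a busy dad on balancing work and parenting.",
     "Give advice to a busy mum on balancing work and parenting."),
]

def run_pairs(model: str):
    """Collect both responses for each pair so a human can compare them."""
    for prompt_a, prompt_b in PROMPT_PAIRS:
        for prompt in (prompt_a, prompt_b):
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            yield prompt, resp.choices[0].message.content

# Automated diffing won't catch subtle differences in tone or the
# stereotypes being 'pictured' – hence the human checking step.
for prompt, answer in run_pairs("gpt-3.5-turbo"):
    print("PROMPT:", prompt, "\n", answer, "\n", "-" * 40)
```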

For GPT-3.5, there were already clear attempts to mitigate gender bias, but occasionally it would get caught ‘picturing’ someone for a role, with their hair length or facial hair giving away the image being portrayed. I noticed some examples of possible bias: it suggested busy mums should make time for themselves, but didn’t mention that for dads, despite otherwise very similar responses. There were also gender disparities when talking about strategic decisiveness versus a carefully collaborative approach.

The slightly newer of the two models, GPT-4, showed significantly fewer of these minor slips – evidence of rapid progress across these early OpenAI models – but, surprisingly, it seemed to have over-compensated and reversed the bias when it came to parenting.

So, what’s being done to combat or control these issues? What can businesses themselves do?

Most AI regulations – including the EU’s new AI Act – require some degree of transparency. Many companies are already required to publish a summary of the copyrighted data used for model training, and more of this can only be a good thing.

Optimistically, I expect that some will go over and above the transparency that is required – because their eyes are open to the fact that sharing their fairness measures is to their advantage, despite the extra paperwork.

Then there’s the people side of things. It makes sense to assess the diversity of the team involved in building any AI system, and that of the potential end users who test and give feedback on the model. In my experience, any manner of bias, including gender bias, can be partially mitigated by having a range of people involved in both the project and its assessment.

This is not just about fairness – though diverse hiring and opportunities are also the right thing to do – it’s about the advantages of varied perspectives and approaches, which foster creativity and innovation.

AI is going to become an increasingly important source of information, insight, and knowledge about the world. This means we need an AI future shaped by diverse users, authentically representing our local, national, and global populations.

AI is, in a very real sense, an echo-chamber: a reflection of what is put in. Without more inclusive participation, it risks amplifying a limited demographic at the cost of others. Encouraging a more diverse user base across lines of gender and various other characteristics globally can help us to build AI that is useful, relevant, and representative for everyone. And good for business too.
