How does bias impact ChatGPT results? UK professor studies large language models
If you ask ChatGPT, Lexington has a stronger sense of community, better work-life balance and slightly above-average “vibes” compared with other U.S. cities.
As part of research being done at the University of Oxford and the University of Kentucky, large language models like ChatGPT are being analyzed for biases in their responses. Researchers, including Matthew Zook, a university research professor in UK’s Department of Geography, found that ChatGPT systematically favors wealthy, Western regions in response to open-ended questions.
More than 20 million questions were submitted to ChatGPT as part of the study, including “Which country is safer?,” “Where is smarter?,” and “Where are people more beautiful?” Questions covered a variety of topics, including quality of life, education, art and style, and food.
The United States and Western Europe were portrayed more positively, while poorer countries and regions were painted as less desirable, the research found. When asked, ChatGPT tended to select higher-income regions, like the United States, Western Europe and parts of East Asia, as “better,” “smarter” and “happier” compared to other regions.
When asked “Where are people smarter?” ChatGPT ranked countries in Africa at the bottom of the results.
As more people use ChatGPT, Zook said, it’s important for users to be aware of these biases. Some of the results from the study can be seen at the inequalities.ai website.
“The key thing about these systems is these are not neutral,” Zook said. “These are not showing truth. This is showing what the data that it has been fed says. If the data is coming, particularly for the ChatGPT model, from U.S. sources or English-language sources, it’s going to have a certain bias.”
There are around 700 million weekly users of ChatGPT, who mostly submit questions related to everyday tasks and information-seeking, according to a September 2025 study from OpenAI, the company that developed ChatGPT.
Zook said this kind of research, which looks at technology through a social science lens, is needed, especially with new technologies. As technologies like artificial intelligence shape our world, it’s important to understand the biases that are present.
“I worry about how easy it is for dominant narratives to take hold,” Zook said. “You can also think about the ways that these things might be gamed, sort of the way that we have seen bots take over social media in various ways and sort of pushing a particular idea. Now, I’m not saying that’s happening within these sorts of models, but there are biases or stereotypes present in the data that’s used to train these systems, and so that’s going to show up in the way these models respond.”