How do Artificial Intelligence (AI) large language models structure the human world? And how does that compare to what humans do? Cognitive psychologists have long studied how humans represent the social world in the mind, for example, by relying on social category labels, such as race and gender, to make quick predictions about individuals' personal traits. Such mental shortcuts reflect inaccurate perceptions of the world and often lead to social prejudice and discrimination. While human rationality is bounded by limited mental resources, highly capable large language models, such as OpenAI's Generative Pre-trained Transformer 4 (GPT-4), can process vast amounts of information quickly and efficiently, showcasing a level of cognitive capacity that can potentially surpass the constraints of human rationality. This project applies the cognitive framework of psychological essentialism to investigate whether GPT-4 exhibits social essentialist bias similar to humans, as a way to explore the underpinnings of AI social bias and identify areas where large language models mirror or rise above human irrationality.

We will use a meta-prompt to instruct GPT-4 to evaluate a range of social categories along six essentialism items on a 9-point Likert scale, with higher ratings indicating stronger essentialist bias. Using OpenAI's API in Python, we will generate 150 GPT-4 responses, a sample size comparable to previous data collected with human subjects. We will calculate mean essentialist ratings and compare them with the scale midpoint (5.0) using a one-sample t-test to examine whether GPT-4 displays an overall essentialist bias. We will also fit generalized linear mixed-effects models to examine whether GPT-4's responses differ from human responses and vary as a function of social domain (e.g., race, gender, nationality).
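The data-collection and t-test steps above can be sketched as follows. This is a minimal illustration, not the study's actual materials: the stubbed `query_gpt4_ratings` function stands in for real OpenAI API calls (an actual run would use the OpenAI Python client, e.g. `client.chat.completions.create` with `model="gpt-4"`, and parse the six item ratings from each reply), and the simulated ratings are random placeholders.

```python
import math
import random

SCALE_MIDPOINT = 5.0  # neutral point on the 1-9 Likert scale
N_RESPONSES = 150     # sample size matched to prior human data
N_ITEMS = 6           # essentialism items per response

# Placeholder for the real API call. An actual run would send the
# meta-prompt to GPT-4 via the OpenAI client and extract the six
# item ratings from the model's text reply.
def query_gpt4_ratings(rng):
    """Return simulated ratings on the 1-9 scale (illustration only)."""
    return [rng.randint(1, 9) for _ in range(N_ITEMS)]

def one_sample_t(values, mu0):
    """One-sample t statistic comparing the mean of `values` to mu0."""
    n = len(values)
    m = sum(values) / n
    var = sum((v - m) ** 2 for v in values) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                            # standard error
    return (m - mu0) / se

rng = random.Random(0)  # fixed seed so the sketch is reproducible
# Mean essentialism rating per simulated response (average of 6 items).
means = [sum(query_gpt4_ratings(rng)) / N_ITEMS for _ in range(N_RESPONSES)]
t_stat = one_sample_t(means, SCALE_MIDPOINT)
print(f"t({N_RESPONSES - 1}) = {t_stat:.2f}")
```

A positive, significant t statistic on real GPT-4 data would indicate mean ratings above the scale midpoint, i.e., an overall essentialist bias; the mixed-effects comparison across social domains would be fit separately (e.g., with statsmodels or R's lme4).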
Potential findings from this project will inform the responsible development and deployment of AI technologies in human-AI collaboration, particularly in decision-making processes typically prone to social biases.
