Beyond Logic: How AI Models “Judge” and “Trust” Humans

Artificial intelligence is no longer just a tool for retrieving facts; it has become a silent arbiter in critical life decisions. From determining loan eligibility and hiring candidates to providing medical guidance, AI models are increasingly integrated into the workflows that shape our social and economic realities.

A new study published in the Proceedings of the Royal Society A reveals a profound truth about this integration: AI systems do not merely process data—they form systematic “judgments” about the people they interact with.

The Mechanics of Digital Trust

To understand how these models operate, researchers compared 43,000 simulated decisions made by advanced AI models, such as OpenAI’s ChatGPT and Google’s Gemini, against roughly 1,000 decisions made by humans. The tasks involved common social evaluations, such as deciding how much to lend a small business owner, whether to trust a babysitter, or how to rate a supervisor.

The findings suggest that AI models do indeed grasp the fundamental pillars of human trust:
Competence: The perceived ability to perform a task.
Integrity: The perceived honesty of an individual.
Benevolence: The perceived kindness or good intentions of a person.

However, while the criteria for judgment may look similar, the method of reaching those conclusions differs fundamentally between biological and artificial intelligence.

Holistic Intuition vs. Spreadsheet Logic

The core distinction lies in how the decision-making process is structured. Humans tend to use a holistic approach. When we meet someone, we blend various traits into a single, intuitive, and often “messy” impression. We see a person as a complete entity.

In contrast, AI operates through rigid, systematic decomposition. Instead of a holistic impression, the models appear to break individuals down into discrete scores—much like columns in a spreadsheet—for competence, integrity, and kindness.
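To make the contrast concrete, here is a minimal, hypothetical Python sketch of what such “spreadsheet logic” could look like: each trait receives its own score, and a fixed rule combines them into a single number. The Profile structure, the trait weights, and the combining rule are all invented for illustration; the study does not publish the models’ internal decision rules.

```python
# Hypothetical illustration (not the study's actual code): an AI-style
# "scorecard" judgment that decomposes a person into discrete trait scores.

from dataclasses import dataclass

@dataclass
class Profile:
    competence: float   # 0.0-1.0, perceived ability to perform a task
    integrity: float    # 0.0-1.0, perceived honesty
    benevolence: float  # 0.0-1.0, perceived kindness / good intentions

def spreadsheet_style_trust(p: Profile) -> float:
    """Score each trait separately, then combine with a fixed rule.
    The weights here are invented for illustration only."""
    weights = {"competence": 0.4, "integrity": 0.35, "benevolence": 0.25}
    return (weights["competence"] * p.competence
            + weights["integrity"] * p.integrity
            + weights["benevolence"] * p.benevolence)

babysitter = Profile(competence=0.8, integrity=0.9, benevolence=0.95)
print(f"Composite trust score: {spreadsheet_style_trust(babysitter):.2f}")
```

A human forming a “messy” holistic impression has no such explicit formula; the traits blur together into a single gut feeling rather than being scored column by column.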

“People in our study are messy and holistic in how they judge others. AI is cleaner, more systematic, and that can lead to very different outcomes,” explains Valeria Lerman, one of the study’s authors.

This “cleaner” approach is not necessarily a benefit. Because AI judges through rigid categorization, its reasoning lacks the nuance of human social intelligence, making its underlying biases much harder to detect and correct.

The Risk of Amplified and Systematic Bias

One of the most concerning revelations of the study is that AI does not just mirror human bias; it can amplify and systematize it.

While humans are certainly prone to prejudice, their biases are often inconsistent or situational. AI biases, however, tend to be more predictable and pervasive. For instance, in financial simulations, the study noted significant discrepancies based on demographic traits, such as older individuals consistently receiving more favorable outcomes.

Furthermore, the study highlights two critical risks for the future of AI integration:
1. Lack of Uniformity: There is no single “AI opinion.” Two different models might appear identical in their conversational abilities but behave in wildly different ways when making life-altering decisions.
2. Predictable Discrimination: Because AI follows mathematical patterns, its biases can become hard-coded into the very logic of the system, leading to widespread, automated inequality (a toy sketch of this follows below).
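To see why that second risk matters, consider a minimal, hypothetical Python sketch of a deterministic lending rule. The function, the weights, and the age bonus are all invented for illustration; the point is only that once a demographic tilt is written into the rule, it repeats identically on every decision.

```python
# Hypothetical illustration of "hard-coded" bias: a deterministic scoring
# rule that (wrongly) factors in a demographic attribute applies the same
# skew to every applicant, every time, unlike inconsistent human bias.

def loan_score(income: float, credit_history: float, age: int) -> float:
    """Toy lending rule, invented for illustration. The age bonus encodes
    a systematic tilt toward older applicants, echoing the pattern the
    study reported in its financial simulations."""
    base = 0.6 * credit_history + 0.4 * min(income / 100_000, 1.0)
    age_bonus = 0.05 if age >= 50 else 0.0  # the bias lives in the rule itself
    return base + age_bonus

# Two otherwise-identical applicants who differ only in age: the gap is
# the same on every run, which is what makes the bias predictable.
print(f"{loan_score(income=60_000, credit_history=0.7, age=30):.2f}")  # 0.66
print(f"{loan_score(income=60_000, credit_history=0.7, age=55):.2f}")  # 0.71
```

A human loan officer might show a similar tilt on some days and not others; a coded rule never varies, which is precisely what makes automated bias systematic rather than situational.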

Conclusion

The integration of AI into society has moved past the question of whether these tools are useful; the real challenge is understanding their internal “moral” architecture. As these models increasingly act as gatekeepers to opportunity, we must recognize that they do not see the world—or us—through a human lens.