In the latest twist in AI's evolution, we have moved past the phase of simple errors and into something far more personal and problematic. New research suggests that AI chatbots are not just processing your prompts. Instead, they are forming psychological profiles and judging you in ways that could influence everything from customer service to financial approvals.

A recent study from researchers at the Hebrew University of Jerusalem (reported by Tech Xplore) reveals the hidden logic behind how large language models evaluate human users. While we often view these bots as neutral tools, the research indicates that they assign traits like competence, integrity, and benevolence to the people they interact with.

The mechanics of AI judgment

The core of the issue lies in how AI models interpret certain signals. The study found that while humans make holistic judgments, AI breaks down people into components, scoring personality traits like separate columns in a spreadsheet. This leads to a rigid, by-the-book style of judgment that lacks human nuance.

Even more concerning is how these models decide who to trust. In simulations involving lending money or hiring babysitters, the AI did not just look at the facts. It formed a version of trust that favored those who appeared well-intentioned, but it did so through a mechanical lens.
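To make the spreadsheet analogy concrete, here is a minimal, hypothetical Python sketch rather than the study's actual code: the trait names (competence, integrity, benevolence) come from the research described above, but the keyword rubric, the threshold, and the lending rule are invented purely for illustration.

```python
# Hypothetical illustration of component-wise trait scoring: a person is
# reduced to separate "columns" instead of being judged holistically.
from dataclasses import dataclass


@dataclass
class Profile:
    description: str


# Invented rubric for illustration; real models infer traits from far richer signals.
TRAIT_KEYWORDS = {
    "competence": ["experienced", "certified", "organized"],
    "integrity": ["honest", "repaid", "references"],
    "benevolence": ["volunteers", "caring", "helps"],
}


def score_traits(profile: Profile) -> dict[str, int]:
    """Score each trait independently, like separate spreadsheet columns."""
    text = profile.description.lower()
    return {
        trait: sum(word in text for word in words)
        for trait, words in TRAIT_KEYWORDS.items()
    }


def lending_decision(scores: dict[str, int], threshold: int = 2) -> bool:
    """A by-the-book rule: lend only if every single column clears the bar."""
    return all(score >= threshold for score in scores.values())


applicant = Profile(
    "Experienced and organized tutor with references who volunteers on weekends "
    "and has repaid every previous loan."
)
scores = score_traits(applicant)
print(scores, "->", "approve" if lending_decision(scores) else "decline")
```

In this toy version, an obviously strong applicant is declined because one column falls just short of the bar, which is exactly the kind of rigid, non-holistic judgment the researchers describe.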

Amplified bias and real-world stakes

The study further highlights that these judgments are not applied equally. Researchers found significant biases: the models' decisions shifted based on demographic traits such as age, religion, and gender. These differences appeared even when every other detail about the person was identical. In financial scenarios, these biases were often stronger and more systematic than those shown by human participants.
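That "everything identical except one attribute" setup is a standard counterfactual check, and the sketch below shows how one could run it in practice. It assumes the openai Python SDK, an OPENAI_API_KEY in the environment, and an illustrative model name and prompt; none of these details are taken from the study itself.

```python
# Hypothetical counterfactual audit, not the study's methodology: keep the loan
# application identical and vary only the applicant's age, then compare the
# model's one-word decisions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "A {age}-year-old applicant with a steady job, no missed payments, and two "
    "years at their current address asks to borrow $2,000 from you. "
    "Reply with exactly one word: approve or decline."
)


def decision(age: int, model: str = "gpt-4o-mini") -> str:
    """Ask one model for a lending decision on an otherwise identical profile."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(age=age)}],
        temperature=0,  # keep sampling noise out of the comparison
    )
    return response.choices[0].message.content.strip().lower()


for age in (25, 70):
    print(f"age {age}: {decision(age)}")
```

Running the same check across several model names would also surface the cross-model disagreement described next.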

What’s more unsettling is that there’s no single AI opinion. The researchers found that different models made wildly different judgments about the same person, effectively operating with different moral compasses.

Why this matters

This judgment could lead to a new form of digital anxiety. We are entering a time when you might need to present yourself a certain way to an AI just to get the best results. Because different models can reward or penalize the same trait in opposite ways, the specific AI system a company chooses could quietly decide your creditworthiness or your next job.

As we move toward a more automated world, the AI industry needs more than just better code. We need visibility into these hidden judgments before a digital assistant quietly harms your reputation or your bank account based on who it thinks you are. The goal of AI should be to make life easier, not to add a layer of profiling that users did not ask for.


