AI Models Mimic Human Flaws in Choosing Random Numbers

AI models continually astound us, not just with what they can do, but with what they can't, and why. A fascinating new behavior illustrates both: they choose random numbers the way humans do, which is to say, poorly.

But what does that imply? Aren’t humans capable of picking numbers randomly? And how can you discern if someone is successful at it? It’s a long-known shortcoming of human nature: we overanalyze and misinterpret randomness.

Ask someone to predict 100 coin flips and compare it to 100 actual coin flips—it’s almost always possible to differentiate the two because, paradoxically, the real coin flips appear less random. For instance, sequences with six or seven consecutive heads or tails are common in real flips but rare in human predictions.
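Don't take my word for it; a few lines of Python (trial counts chosen arbitrarily) estimate how often 100 fair flips contain such a streak:

```python
import random

# Quick simulation: how often do 100 fair flips contain a run of
# six or more identical outcomes in a row?
def longest_run(flips: str) -> int:
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

trials = 10_000
hits = sum(
    longest_run("".join(random.choice("HT") for _ in range(100))) >= 6
    for _ in range(trials)
)
print(f"{hits / trials:.0%} of sequences have a run of 6+")  # roughly 80% in practice
```

Streaks that long appear in roughly four out of five real sequences, yet humans predicting flips almost never include them.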

The same applies when you ask someone to pick a number between 0 and 100. People almost never choose 1 or 100. Multiples of 5 are rare, as are numbers with repeating digits like 66 and 99. These don't feel like "random" choices: they seem too small, too large, too distinctive. So people often pick a number ending in 7, generally from somewhere in the middle.

Numerous psychological studies highlight this predictability. Yet, it remains intriguing when AIs exhibit similar patterns.

Indeed, some inquisitive engineers at Gramener conducted an informal yet captivating experiment by asking several prominent LLM chatbots to select a random number between 0 and 100.

Reader, the results were not random.

Image Credits: Gramener

All three models tested had a "favorite" number that was always their answer in the most deterministic mode, but which was still the most common choice even at higher "temperatures," the setting that increases the variability of a model's output.
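Reproducing the experiment is straightforward. Here is a minimal sketch using OpenAI's Python client; the model name, prompt wording, and sample count are my own assumptions, not necessarily what Gramener used:

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def sample_numbers(n: int = 50, temperature: float = 1.0) -> Counter:
    """Ask the model for a 'random' number n times and tally the answers."""
    counts: Counter = Counter()
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model; swap in whichever you want to test
            messages=[{
                "role": "user",
                "content": "Pick a random number between 0 and 100. Reply with only the number.",
            }],
            temperature=temperature,
        )
        counts[resp.choices[0].message.content.strip()] += 1
    return counts

print(sample_numbers(temperature=0.0))  # expect one dominant "favorite"
print(sample_numbers(temperature=1.5))  # more spread, but still biased
```

At temperature 0 you should see a single answer dominate; as the temperature rises the tallies spread out, though, as described below, not uniformly.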

OpenAI’s GPT-3.5 Turbo favored 47. Previously, it preferred 42—a number famously known from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy as the answer to life, the universe, and everything.

Anthropic’s Claude 3 Haiku chose 42. Gemini leaned towards 72.

Even more intriguing, all three models reflected human-like biases in other numbers they selected, even at high temperatures.

They all avoided extreme values; Claude never went above 87 or below 27, and even those were outliers. Repeated digits were consistently avoided: no 33, 55, or 66, though 77 did appear (it ends in 7). Round numbers were almost entirely absent, although Gemini did once, at the highest temperature, pick 0.

Why is this the case? AIs aren’t human! Why would they care about what seems random? Have they achieved consciousness and is this how they reveal it?

No. The explanation, as is often the case with such phenomena, is that we are anthropomorphizing too much. These models don't care about what is and isn't random. They don't know what "randomness" means! They answer this question the same way they answer everything else: by consulting their training data and reproducing what was most often written after prompts that looked like "pick a random number." The more often an answer appears there, the more often the model repeats it.
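A toy sketch makes this concrete. Suppose we stand in for "training data" with an invented frequency table of human answers (all numbers below are made up for illustration, and a real model samples over tokens, not integers). Temperature-scaled sampling from it behaves much like the chatbots above:

```python
import numpy as np

# Toy illustration, not a real language model: invented frequencies
# standing in for how often each answer appeared in training data.
rng = np.random.default_rng(0)
answers = np.array([37, 42, 47, 57, 63, 77])
counts = np.array([5.0, 30.0, 50.0, 8.0, 4.0, 3.0])  # made-up frequencies
logits = np.log(counts)

def sample(temperature: float, n: int = 10):
    if temperature == 0:
        # Deterministic mode: always the single most frequent answer.
        return [int(answers[logits.argmax()])] * n
    p = np.exp(logits / temperature)  # temperature-scaled softmax
    p /= p.sum()
    return list(rng.choice(answers, size=n, p=p))

print(sample(0.0))  # the "favorite" (47 here) every single time
print(sample(1.0))  # mostly 47 and 42; never a number outside the learned set
```

In this toy, an answer absent from the data can never be sampled at all; real models are softer about it, but the pull toward frequent answers is the same.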

Where in their training data would 100 appear if almost no one ever responds that way? For the AI, 100 might not seem like a valid answer. With no genuine reasoning capability and no comprehension of numbers, it can only respond like the stochastic parrot it is. (Similarly, they often falter at simple arithmetic, like multiplying a few numbers; after all, how probable is it that their training data includes phrases like “112*894*32=3,204,096”? Though newer models recognize arithmetic problems and delegate them to a subroutine.)

It's an instructive example of LLM behavior and of the apparent humanity these systems can project. In every interaction with them, it's worth remembering that they have been trained to act the way people act, even when that was not the intent. That is why pseudanthropy is so hard to avoid or prevent.

You might be tempted to say these models "think they're people," but that would be misleading. As is frequently noted, they don't think at all. Their responses, however, always imitate people, without any need to know or think. Whether you ask for a chickpea salad recipe, investment advice, or a random number, the process is the same. The results feel human because they were drawn from human-produced content and remixed, for your convenience and, of course, for big AI's bottom line.
