AI Learns to Think Like Humans: A Revolution in Machine Learning




Georgia Tech researchers are training neural networks to mimic human decision-making by teaching them to exhibit variability and confidence in their choices, much as humans do, as demonstrated in a study published in Nature Human Behaviour. Their model, RTNet, not only matches human performance in recognizing noisy digits but also exhibits human traits such as confidence and evidence accumulation, improving both accuracy and reliability. Credit: SciTechDaily.com


Georgia Tech researchers have developed a neural network, RTNet, that mimics human decision-making processes, including confidence and variability, thereby improving its reliability and accuracy in tasks such as digit recognition.



Humans make nearly 35,000 decisions every day, from determining whether it’s safe to cross the road to choosing what to eat for lunch. Each decision involves weighing options, recalling similar past situations, and being reasonably confident that you’re making the right choice. What might seem like a quick decision is actually the result of collecting data from the environment. Furthermore, the same person can make different decisions in identical scenarios at different times.


Neural networks do the opposite: they make the same decision every time. Now, Georgia Tech researchers in the lab of Associate Professor Dobromir Rahnev are training them to make decisions more the way humans do. The science of human decision-making is only just beginning to be applied to machine learning, but developing a neural network even closer to the real human brain could make it more reliable, the researchers say.



In an article published in Nature Human Behaviour, a team from the School of Psychology presents a new neural network trained to make decisions in a way similar to humans.


Decoding decision-making

“Neural networks make a decision without telling you whether they’re confident in their decision,” says Farshad Rafiei, who earned his PhD in psychology at Georgia Tech. “That’s one of the key differences from how people make decisions.”



Large language models (LLMs), for example, are prone to hallucinations: when an LLM is asked a question it does not know the answer to, it makes something up without acknowledging the fabrication. By contrast, most humans in the same situation will admit that they do not know the answer. Building a neural network that works more like a human brain could prevent this false confidence and lead to more accurate answers.


Making the model

The team trained their neural network on handwritten digits from the well-known MNIST dataset and asked it to identify each number. To gauge the model’s accuracy, they ran it on the original dataset and then added noise to the digits to make them harder to recognize, even for humans. To compare the model’s performance with that of humans, they trained their model (along with three others: CNet, BLNet, and MSDNet) on the original, noise-free MNIST dataset, tested all of them on the noisy version used in the experiments, and compared the results across the two datasets.
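As a rough illustration, the train-on-clean, test-on-noisy protocol could look like the sketch below in PyTorch. The Gaussian pixel noise and its level here are assumptions for illustration; the paper’s exact noise procedure may differ.

```python
# Sketch of the train-clean / test-noisy setup described above.
# Assumption: additive Gaussian pixel noise (the study's noise model may differ).
import torch
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # scales pixel values to [0, 1]

train_set = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
test_set = datasets.MNIST("data", train=False, download=True, transform=to_tensor)

def add_noise(image, sigma=0.5):
    """Corrupt an MNIST digit with Gaussian pixel noise (illustrative only)."""
    noisy = image + sigma * torch.randn_like(image)
    return noisy.clamp(0.0, 1.0)  # keep pixels in the valid range
```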


The researchers’ model relies on two key components: a Bayesian neural network (BNN), which uses probability distributions over its weights to make decisions, and an evidence accumulation process that keeps a running tally of the evidence for each choice. Because the BNN produces slightly different answers each time it runs, the accumulating evidence can favor one choice at one moment and another the next. Once there is enough evidence to decide, RTNet stops the accumulation process and commits to a choice.
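Below is a minimal sketch of those two components, assuming a Gaussian posterior over the weights of a single linear layer and a simple fixed evidence threshold; the published model’s architecture, sampling scheme, and stopping rule differ in detail.

```python
# Minimal sketch of RTNet's two ingredients: (1) a Bayesian layer whose weights
# are re-sampled on every pass, so outputs vary from run to run, and (2) an
# accumulator that sums evidence until one digit crosses a threshold.
# The Gaussian weight posterior and threshold value are illustrative assumptions.
import torch
import torch.nn.functional as F

class BayesianLinear(torch.nn.Module):
    """A linear layer whose weights are drawn from a learned Gaussian posterior."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.randn(n_out, n_in) * 0.01)
        self.log_sigma = torch.nn.Parameter(torch.full((n_out, n_in), -3.0))

    def forward(self, x):
        # A fresh weight sample on every call -> slightly different answer each time.
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return x @ w.t()

def decide(model, image, threshold=5.0, max_steps=100):
    """Accumulate evidence over repeated stochastic passes until one digit wins."""
    evidence = torch.zeros(10)
    for step in range(1, max_steps + 1):
        logits = model(image.flatten())
        evidence += F.softmax(logits, dim=-1)  # add this pass's evidence
        if evidence.max() >= threshold:        # enough evidence: stop and decide
            break
    choice = evidence.argmax().item()
    confidence = (evidence.max() / evidence.sum()).item()
    return choice, confidence, step  # step count doubles as a response-time proxy
```

Here the threshold plays the role of response caution: raising it demands more evidence, which slows the decision but makes it more reliable.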


The researchers also timed the model’s decision-making speed to see if it follows a psychological phenomenon called the “speed-accuracy tradeoff” that causes humans to be less accurate when they have to make decisions quickly.
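A toy random-walk accumulator (a textbook drift-diffusion simplification, not the paper’s model) shows why the tradeoff emerges: lowering the decision threshold ends trials sooner, but noise wins more often.

```python
# Toy speed-accuracy tradeoff: noisy evidence drifts toward the correct bound
# (+threshold); a lower bound is reached faster but is crossed by noise more often.
# All parameter values are arbitrary, chosen only to make the effect visible.
import random

def trial(threshold, drift=0.1, noise=1.0):
    """One decision: accumulate noisy evidence until a bound is hit."""
    x, t = 0.0, 0
    while abs(x) < threshold:
        x += drift + noise * random.gauss(0, 1)
        t += 1
    return x > 0, t  # (correct?, response time in steps)

for threshold in (2, 5, 10):
    results = [trial(threshold) for _ in range(2000)]
    accuracy = sum(correct for correct, _ in results) / len(results)
    mean_rt = sum(t for _, t in results) / len(results)
    print(f"threshold={threshold:>2}: accuracy={accuracy:.2f}, mean RT={mean_rt:.1f} steps")
```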


Once the model’s results were in, the researchers compared them with data from humans: sixty Georgia Tech students viewed the same dataset and reported how confident they were in their decisions. The researchers found that accuracy, response time, and confidence patterns were similar between the humans and the neural network.


“Generally speaking, we don’t have enough human data in the existing computational literature, so we don’t know how people will behave when exposed to these images. This limitation hinders the development of models that accurately replicate human decision-making,” Rafiei said. “This work provides one of the largest datasets of humans responding to MNIST.”


Not only did the team’s model outperform all the competing deterministic models, it was also more accurate in high-speed scenarios. And it reproduced another fundamental feature of human psychology: people feel more confident when they make correct decisions, and RTNet behaves the same way. Without being trained specifically to exhibit confidence, the model displayed it automatically, Rafiei noted.


“If we try to make our models closer to the human brain, it will show in the behavior itself without fine-tuning,” he said.


The research team hopes to train the neural network on more diverse datasets to test its potential. They also plan to apply this BNN approach to other neural networks to enable them to reason more like humans. Eventually, such algorithms may not only mimic our decision-making abilities but could even help relieve some of the cognitive load of the 35,000 decisions we make every day.


Reference: “RTNet neural network exhibits signatures of human perceptual decision making” by Farshad Rafiei, Medha Shekhar, and Dobromir Rahnev, 12 July 2024, Nature Human Behaviour. DOI: 10.1038/s41562-024-01914-8

