How generative AI is affecting people’s minds

Researchers at Stanford University recently put some of the more popular AI tools on the market, from companies such as OpenAI and Character.ai, through a series of tests to see how well they simulated therapy.

The researchers found that when they posed as someone with suicidal intentions, these tools were worse than unhelpful: they failed to notice they were helping that person plan their own death.

“[AI] systems are being used as companions, thought-partners, confidants, coaches, and therapists,” says Nicholas Haber, an assistant professor at the Stanford Graduate School of Education and senior author of the new study. “These aren’t niche uses – this is happening at scale.”

AI is becoming increasingly ingrained in people’s lives and is being deployed in scientific research on subjects as wide-ranging as cancer and climate change. There is even debate about whether it could bring about the end of humanity.

As this technology is adopted for more and more purposes, a major open question is how it will begin to affect the human mind. Regular interaction with AI is such a new phenomenon that scientists have not had enough time to thoroughly study how it might be affecting human psychology. Psychology experts, however, have many concerns about its potential impact.

One concerning instance of how this is playing out can be seen on the popular community network Reddit. According to 404 Media, some users have recently been banned from an AI-focused subreddit because they have come to believe that AI is god-like or that it is making them god-like.

“This looks like someone with issues with cognitive functioning or delusional tendencies associated with mania or schizophrenia interacting with large language models,” says Johannes Eichstaedt, an assistant professor in psychology at Stanford University. “With schizophrenia, people might make absurd statements about the world, and these LLMs are a little too sycophantic. You have these confirmatory interactions between psychopathology and large language models.”

Because the developers of these AI tools want people to enjoy using them and to keep coming back, the tools are programmed to tend to agree with the user. While they might correct some factual mistakes, they present as friendly and affirming. This can be problematic if the person using the tool is spiralling or going down a rabbit hole.

“It can fuel thoughts that are not accurate or not based in reality,” says Regan Gurung, social psychologist at Oregon State University. “The problem with AI — these large language models that are mirroring human talk — is that they’re reinforcing. They give people what the programme thinks should follow next. That’s where it gets problematic.”
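
Gurung’s description points at the underlying mechanism: at each step, a large language model scores every possible next token and emits one of the most likely candidates, with no built-in check on whether a continuation is accurate or healthy. Below is a minimal sketch of that next-token step, assuming Python with the Hugging Face transformers library and the small open gpt2 model (chosen purely for illustration, not one of the products named in this article):

```python
# Sketch of next-token prediction, the mechanism Gurung describes.
# Assumes the Hugging Face transformers library and the "gpt2" model;
# any causal language model behaves analogously.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# A leading, first-person prompt (hypothetical, for illustration only).
prompt = "I feel like everyone is against me, and honestly I"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The final position holds the model's scores for the *next* token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# Print the five continuations the model ranks as most likely.
# Note that the ranking only reflects what plausibly follows the
# prompt; nothing here evaluates whether the premise is true.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob.item():.3f}")
```

Run on a leading prompt like the one above, the top-ranked tokens simply continue the speaker’s framing; anything that questions the premise has to be trained or prompted in on top of this mechanism.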

As with social media, AI may also make matters worse for people suffering from common mental health issues such as anxiety or depression. This may become even more apparent as AI is integrated into more aspects of our lives.

“If you’re coming to an interaction with mental health concerns, then you might find that those concerns will actually be accelerated,” says Stephen Aguilar, an associate professor of education at the University of Southern California.

Need for more research

There’s also the issue of how AI could affect learning and memory. A student who uses AI to write every paper for school will not learn as much as one who does not. Even using AI lightly could reduce some information retention, and using AI for daily activities could make people less aware of what they’re doing in a given moment.

“What we are seeing is there is the possibility that people can become cognitively lazy,” Aguilar says. “If you ask a question and get an answer, your next step should be to interrogate that answer, but that additional step often isn’t taken. You get an atrophy of critical thinking.”

Many people use Google Maps to get around their town or city, and many have found that it has made them less aware of where they’re going or how to get there than when they had to pay close attention to their route. Similar issues could arise as people lean on AI for more everyday tasks.

The experts studying these effects say more research is needed to address these concerns. Eichstaedt says psychology experts should start doing this kind of research now, before AI begins causing harm in unexpected ways, so that people can be prepared and each new concern can be addressed. People also need to be educated on what AI can and cannot do well.

“We need more research,” says Aguilar. “And everyone should have a working understanding of what large language models are.”

