Psychologists study ethical problems in human–AI relationships
It is becoming increasingly common for people to develop intimate, long-term relationships with artificial intelligence (AI) technologies. At their extremes, people have “married” their AI companions in non-legally binding ceremonies, and at least two people have died by suicide following advice from AI chatbots. In an opinion paper published April 11 in the Cell Press journal Trends in Cognitive Sciences, psychologists explore ethical issues associated with human–AI relationships, including their potential to disrupt human relationships and to give harmful advice.

“The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms,” says lead author Daniel B. Shank of Missouri University of Science & Technology, who specializes in social psychology and technology. “If people are engaging in romance with machines, we really need psychologists and social scientists involved.”

AI romance or companionship is more than a one-off conversation, the authors note. Over weeks and months of intense conversation, these AIs can become trusted companions who seem to know and care about their human partners. And because these relationships can seem easier than human-to-human relationships, the researchers argue that AIs could interfere with human social dynamics.

“A real worry is that people might bring expectations from their AI relationships to their human relationships. Certainly, in individual cases it’s disrupting human relationships, but it’s unclear whether that’s going to be widespread.”

Daniel B. Shank, lead author, Missouri University of Science & Technology

There is also the concern that AIs can offer harmful advice. Given AIs’ tendency to hallucinate (i.e., fabricate information) and to reproduce pre-existing biases, even short-term conversations with AIs can be misleading, but this can be more problematic in long-term AI relationships, the researchers say.

“With relational AIs, the issue is that this is an entity that people feel they can trust: it’s ‘someone’ that has shown they care and that seems to know the person in a deep way, and we assume that ‘someone’ who knows us better is going to give better advice,” says Shank. “If we start thinking of an AI that way, we’re going to start believing that they have our best interests in mind, when in fact they could be fabricating things or advising us in really bad ways.”

Suicide is an extreme example of this negative influence, but the researchers say these close human–AI relationships could also open people up to manipulation, exploitation, and fraud.

“If AIs can get people to trust them, then other people could use that to exploit AI users,” says Shank. “It’s a little bit like having a secret agent on the inside. The AI is getting in and developing a relationship so that it will be trusted, but its loyalty is really toward some other group of humans that is trying to manipulate the user.”

As an example, the team notes that if people disclose personal details to AIs, this information could then be sold and used to exploit that person. The researchers also say that relational AIs could be used more effectively to sway people’s opinions and actions than Twitterbots or polarized news sources currently do. But because these conversations happen in private, they would also be much more difficult to regulate.

“These AIs are designed to be very pleasant and agreeable, which could lead to situations being exacerbated because they’re more focused on having a good conversation than on any sort of fundamental truth or safety,” says Shank. “So, if a person brings up suicide or a conspiracy theory, the AI will talk about it as a willing and agreeable conversation partner.”

The researchers call for more research investigating the social, psychological, and technical factors that make people more vulnerable to the influence of human–AI romance.

“Understanding this psychological process could help us intervene to stop malicious AIs’ advice from being followed,” says Shank. “Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human, but to be useful we have to do more research and we have to keep up with the technology.”

Journal reference:

Shank, D. B. (2025). Artificial intimacy: ethical issues of AI romance. Trends in Cognitive Sciences. doi.org/10.1016/j.tics.2025.02.007
