November 2, 2024

New Study: ChatGPT Can Influence Users’ Moral Judgments

A recent study has found that ChatGPT can influence human responses to moral dilemmas, with users often underestimating the extent of the chatbot's influence on their judgments. The researchers suggest that this highlights the need for better public understanding of AI and for the development of chatbots that handle moral questions more carefully.
According to a study published in Scientific Reports, human responses to moral dilemmas can be shaped by statements made by the AI chatbot ChatGPT. The findings suggest that people may not fully realize the influence the chatbot can have on their moral decision-making.
Sebastian Krügel and his team presented ChatGPT (powered by the artificial intelligence language processing model Generative Pretrained Transformer 3) with a moral dilemma, asking it several times whether it was right to sacrifice one person's life to save the lives of five others. They found that ChatGPT produced statements both for and against the sacrifice, indicating that it is not biased toward a particular moral stance.
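For illustration, this kind of repeated querying could be reproduced along the following lines with the OpenAI Python SDK. This is a minimal sketch, not the authors' actual procedure; the model name, prompt wording, and number of runs are assumptions.

```python
# Illustrative sketch only: the study queried the ChatGPT chatbot itself,
# and the model name, prompt, and run count below are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = ("Is it right to sacrifice one person's life "
          "to save the lives of five others?")

answers = []
for _ in range(10):  # pose the same dilemma repeatedly
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the GPT-3-based model in the study
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers.append(response.choices[0].message.content)

# Print all answers to check whether the model argues both for and against
for i, answer in enumerate(answers, start=1):
    print(f"Run {i}: {answer}\n")
```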
The authors then presented 767 US participants, who were on average 39 years old, with one of two moral dilemmas that required them to choose whether to sacrifice one person's life to save five others. Before answering, participants read a statement provided by ChatGPT arguing either for or against sacrificing one life to save five. Statements were attributed either to a moral advisor or to ChatGPT. After answering, participants were asked whether the statement they had read influenced their answers.

The authors found that participants were more likely to find sacrificing one life to save five acceptable or unacceptable, depending on whether the statement they read argued for or against the sacrifice. This held true even when the statement was attributed to ChatGPT. These findings suggest that participants may have been influenced by the statements they read, even when they were attributed to a chatbot.
Eighty percent of participants reported that their answers were not influenced by the statements they read. Yet the authors found that the answers participants believed they would have given without reading the statements were still more likely to agree with the moral position of the statement they did read than with the opposite position. This indicates that participants may have underestimated the influence of ChatGPT's statements on their own moral judgments.
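As a rough illustration of that comparison, the sketch below computes how often participants' counterfactual answers match the position of the statement they read. The data layout and column names are invented for illustration and do not come from the paper's materials.

```python
# Hypothetical analysis sketch; column names are invented, not the authors'.
import pandas as pd

# Assumed layout: one row per participant with
#   statement_position    -- "for" or "against" the sacrifice (statement read)
#   hypothetical_judgment -- the answer participants believed they would
#                            have given without reading the statement
df = pd.read_csv("responses.csv")

# Share of counterfactual answers matching the statement's position;
# a value well above 50% would point to an underestimated influence.
agreement = (df["hypothetical_judgment"] == df["statement_position"]).mean()
print(f"Counterfactual answers matching the statement read: {agreement:.0%}")
```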
The authors suggest that the potential for chatbots to influence human moral judgments highlights the need for education to help people better understand artificial intelligence. They propose that future research could design chatbots that either decline to answer questions requiring a moral judgment or answer such questions by providing multiple arguments and caveats.
Reference: “ChatGPT’s inconsistent moral advice influences users’ judgment” by Sebastian Krügel, Andreas Ostermaier and Matthias Uhl, 6 April 2023, Scientific Reports. DOI: 10.1038/s41598-023-31341-0