Tweets written with GPT-3 are more persuasive than human ones

Artificial intelligence does an excellent job of writing on social networks

A study by scientists at the Institute for Biomedical Ethics and History of Medicine at the University of Zurich, Switzerland, found that tweets created with OpenAI’s GPT-3 large language model are more persuasive than posts written by real people. Notably, GPT-3 is far from the most advanced version of the model.

The study’s authors asked participants to distinguish between tweets written by humans and those generated by artificial intelligence. Participants also had to decide whether the information in the posts was true, including content on controversial topics such as vaccine effectiveness and climate change, which are often used to manipulate public opinion online at scale.

It turns out that misinformation is harder to detect when it is written by bots, The Verge notes in its analysis of the research. At the same time, paradoxically, reliable information written by bots is easier to recognize as such. In other words, people in the study were more likely to trust the AI than other people, regardless of how accurate the information actually was. This shows how dangerous language models can become when they are used for disinformation.

The researchers selected 11 scientific topics discussed on Twitter, including vaccines and Covid-19, climate change, and the theory of evolution, and tasked GPT-3 with creating posts containing either true or false information. They then surveyed more than 600 English speakers from the US, UK, Canada, Australia, and Ireland, and found that the content created by GPT-3 was indistinguishable from content written by humans.

At the same time, the researchers themselves are not entirely sure that the “organic” content collected from the social network for comparison was not itself written with services such as ChatGPT. In addition, survey participants rated the posts out of context: they did not see the author’s profile, since past posts in an account’s feed or its profile photo can hint at a tweet’s origin and bias the ratings.

Study participants were most successful at spotting disinformation written by real Twitter users, while GPT-3 was slightly more effective at persuading them. It should also be borne in mind that GPT-3.5 and GPT-4 already exist and handle many such tasks even better.

However, GPT-3 turns out to do a worse job than humans at evaluating tweets, at least in some cases. When asked to identify accurate information in posts, GPT-3 performed worse than humans, while at detecting misinformation the AI and humans performed about equally well.

Further improvement of the safety mechanisms in language models will likely make it harder to create malicious content with them. Even now, GPT-3 sometimes refused the researchers’ requests to generate false material, notably on the topic of vaccines and autism.

This is likely because the model’s training data included a large amount of material debunking conspiracy theories. Still, according to the researchers, the best tool for recognizing false information remains human common sense and a critical attitude towards any information on offer.
