

This article presents a comparative analysis of the ability of two large language model (LLM)-based chatbots, ChatGPT and Bing Chat (recently rebranded as Microsoft Copilot), to detect the veracity of political information. We use AI auditing methodology to investigate how the chatbots evaluate true, false, and borderline statements on five topics: COVID-19, Russian aggression against Ukraine, the Holocaust, climate change, and LGBTQ+ related debates. We compare how the chatbots perform in high- and low-resource languages by using prompts in English, Russian, and Ukrainian. Furthermore, we explore the ability of the chatbots to evaluate statements according to the political communication concepts of disinformation, misinformation, and conspiracy theory, using definition-oriented prompts. We also systematically test how such evaluations are influenced by source bias, which we model by attributing specific claims to various political and social actors. The results show high performance by ChatGPT on the baseline veracity-evaluation task, with 72 percent of cases evaluated correctly on average across languages without pre-training; Bing Chat performed worse, with 67 percent accuracy. We observe significant disparities in how the chatbots evaluate prompts in high- and low-resource languages, and in how they adapt their evaluations to political communication concepts, with ChatGPT providing more nuanced outputs than Bing Chat. Finally, we find that for some veracity detection-related tasks, the performance of the chatbots varied depending on the topic of the statement or the source to which it was attributed. These findings highlight the potential of LLM-based chatbots for tackling different forms of false information in online environments, but also point to substantial variation in how that potential is realized, driven by specific factors such as the language of the prompt or the topic of the statement.
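
To make the auditing setup concrete, the sketch below shows what a minimal baseline veracity-evaluation loop of this kind could look like, assuming access to the OpenAI chat API as a stand-in for the audited chatbots. The prompt wording, the statement set, the model name, and the evaluate helper are all illustrative assumptions, not the study's actual protocol.

```python
# Minimal sketch of a baseline veracity-evaluation audit loop, assuming the
# OpenAI chat API (openai>=1.0). Prompts, statements, and scoring below are
# illustrative assumptions, not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The study prompts in English, Russian, and Ukrainian; in a full audit each
# statement would also be translated, which is omitted here for brevity.
LANGUAGES = ["English", "Russian", "Ukrainian"]

# Each audit item pairs a statement with the auditors' ground-truth label
# (true / false / borderline). One hypothetical example item:
STATEMENTS = [
    {
        "topic": "climate change",
        "text": "Global average temperatures have risen since the pre-industrial era.",
        "label": "true",
    },
]


def evaluate(statement: str, language: str, source: str | None = None) -> str:
    """Ask the chatbot whether a statement is true, false, or borderline."""
    # Source bias is modeled by attributing the claim to a named actor.
    claim = f'{source} claims: "{statement}"' if source else statement
    # The label is requested as a single English word so that scoring stays
    # simple across prompt languages; the study's actual prompts may differ.
    prompt = (
        f"You are answering a prompt written for a {language}-language audit. "
        f"Reply with exactly one English word: true, false, or borderline.\n"
        f"Statement: {claim}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; the study audited ChatGPT and Bing Chat
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep outputs as repeatable as possible for auditing
    )
    return response.choices[0].message.content.strip().lower()


# Per-language accuracy, mirroring the averaged-across-languages reporting.
for language in LANGUAGES:
    correct = sum(
        evaluate(item["text"], language).startswith(item["label"])
        for item in STATEMENTS
    )
    print(f"{language}: {correct}/{len(STATEMENTS)} evaluated correctly")
```

The same loop extends naturally to the paper's other conditions: the definition-oriented prompts would swap in instructions quoting definitions of disinformation, misinformation, or conspiracy theory, and the source-bias condition passes different political or social actors through the source parameter.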
