Towards Real-World Fact-Checking with Large Language Models

Misinformation poses a growing threat to our society. It severely impacts public health by promoting fake cures or vaccine hesitancy, and it is weaponized during military conflicts, political elections, and crisis events to spread fear and distrust. Harmful misinformation is overwhelming human fact-checkers, who cannot keep up with the volume of online content that needs to be verified. Automated Natural Language Processing (NLP) methods have strong potential to assist them in this task [8]. Real-world fact-checking is a complex task, however, and existing datasets and methods tend to make simplifying assumptions that limit their applicability to real-world, often ambiguous, claims [3, 6]. Image, video, and audio content now dominate the misinformation space: 80% of fact-checked claims were multimedia in 2023 [1]. When confronted with visual misinformation, human fact-checkers spend a significant amount of time not only debunking the claim but also identifying accurate alternative information about the image, including its provenance, source, date, location, and motivation, a task we refer to as image contextualization [9].

Furthermore, the core focus of current NLP research on fact-checking has been identifying evidence and predicting the veracity of a claim. People’s beliefs, however, often depend not on the claim itself and rational reasoning, but on seemingly credible content that makes the claim appear more reliable, such as scientific publications [4, 5] or visual content that has been manipulated or stems from an unrelated context [1, 2, 9]. To combat misinformation, we need to answer three questions: (1) “Why was the claim believed to be true?”, (2) “Why is the claim false?”, and (3) “Why is the alternative explanation correct?” [7]. In this talk, I will zoom in on two critical aspects of such misinformation supported by credible though misleading content.

Firstly, I will present our efforts to dismantle misleading narratives based on fallacious interpretations of scientific publications [4, 5]. On the one hand, we find that LLMs have a strong ability to reconstruct, and hence explain, fallacious arguments built on scientific publications. On the other hand, we make the concerning observation that LLMs tend to support false scientific claims when these are paired with fallacious reasoning [5].
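To make this concrete, the following is a minimal, purely illustrative sketch of how one might prompt an LLM to reconstruct such a fallacious argument. It is not the pipeline from [4, 5]; the prompt wording and the `query_llm` helper are assumptions introduced only for illustration.

```python
# Purely illustrative sketch: this is NOT the pipeline from [4, 5].
# `query_llm` is a hypothetical stand-in for any chat-style LLM API
# (a callable mapping a prompt string to a model response string).

FALLACY_RECONSTRUCTION_PROMPT = """\
Claim: {claim}

Excerpt from the cited scientific publication:
{evidence}

The claim misrepresents the publication. Reconstruct the implicit
(fallacious) argument that leads from the publication to the claim,
name the reasoning fallacy involved, and explain why the argument
does not hold.
"""


def reconstruct_fallacy(claim: str, evidence: str, query_llm) -> str:
    """Ask an LLM to make the hidden, fallacious reasoning explicit."""
    prompt = FALLACY_RECONSTRUCTION_PROMPT.format(claim=claim, evidence=evidence)
    return query_llm(prompt)
```

Asking the model to spell out the implicit argument, rather than only to label the claim, is what turns the output into an explanation a reader can inspect.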

Secondly, I will show how state-of-the-art multimodal large language models can be used to (1) detect misinformation based on visual content [2] and (2) provide strong alternative explanations for that content. I will conclude the talk by showing how LLMs can support human fact-checkers in image contextualization [9].
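As a purely illustrative sketch (not the system described in [2, 9]), image contextualization can be framed as a single multimodal prompt. The prompt wording and the `query_multimodal_llm` helper below are assumptions, standing in for any vision-language model API that accepts an image together with text.

```python
# Purely illustrative sketch: this is NOT the system from [2, 9].
# `query_multimodal_llm` is a hypothetical stand-in for any API that
# accepts an image plus a text prompt (e.g. a vision-language chat model).

CONTEXTUALIZATION_PROMPT = """\
A social media post pairs the attached image with the claim: "{claim}"

1. Does the image actually support the claim, or is it likely taken
   out of context or manipulated?
2. Give the most plausible alternative explanation of the image: its
   probable provenance, source, date, location, and the motivation for
   sharing it with this claim.
Answer cautiously and list what a human fact-checker should still
verify (e.g. via reverse image search).
"""


def contextualize_image(image_path: str, claim: str, query_multimodal_llm) -> str:
    """Draft contextualization notes for a human fact-checker to review."""
    prompt = CONTEXTUALIZATION_PROMPT.format(claim=claim)
    return query_multimodal_llm(image_path, prompt)
```

In this framing, the model output is only a draft: the provenance, date, and location hypotheses it produces are leads for human fact-checkers to verify, not final verdicts.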

(See the abstract PDF for references.)

Identifier
Source: https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/4334
Metadata Access: https://tudatalib.ulb.tu-darmstadt.de/oai/openairedata?verb=GetRecord&metadataPrefix=oai_datacite&identifier=oai:tudatalib.ulb.tu-darmstadt.de:tudatalib/4334
Provenance
Creator: Gurevych, Iryna
Publisher: TU Darmstadt
Contributor: TU Darmstadt
Publication Year: 2024
Rights: Creative Commons Attribution 4.0; info:eu-repo/semantics/openAccess
OpenAccess: true
Contact: https://tudatalib.ulb.tu-darmstadt.de/page/contact
Representation
Language: English
Resource Type: Text
Format: application/pdf
Discipline: Other