Characterizing and Detecting Propaganda-Spreading Accounts on Telegram

Authors: 

Klim Kireev, EPFL and Max Planck Institute for Security and Privacy (MPI-SP); Yevhen Mykhno, unaffiliated; Carmela Troncoso, EPFL and Max Planck Institute for Security and Privacy (MPI-SP); Rebekah Overdorf, Ruhr University Bochum (RUB), Research Center Trustworthy Data Science and Security in University Alliance Ruhr, and University of Lausanne

Abstract: 

Information-based attacks on social media, such as disinformation campaigns and propaganda, are emerging cybersecurity threats. The security community has focused on countering these threats on social media platforms like X and Reddit. However, they also appear on instant-messaging platforms such as WhatsApp, Telegram, and Signal. On these platforms, information-based attacks primarily take place in groups and channels, where countering them requires manual moderation by channel administrators. We collect, label, and analyze a large dataset of more than 17 million Telegram comments and messages. Our analysis uncovers two independent, coordinated networks that spread pro-Russian and pro-Ukrainian propaganda, garnering replies from real users. We propose a novel mechanism for detecting propaganda that capitalizes on the relationship between legitimate user messages and propaganda replies and is tailored to the information that Telegram makes available to moderators. Our method is faster and cheaper than human moderation and, after seeing only one message from an account, achieves a detection rate of 97.6%, 11.6 percentage points higher than human moderators. It remains effective as propaganda evolves.
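The full detection mechanism is described in the paper; as a rough illustration of the core idea of classifying a reply jointly with the legitimate message it answers, the sketch below trains a simple text-pair classifier. The TF-IDF/logistic-regression model, the [SEP] joining convention, and the toy examples are assumptions for illustration only, not the paper's actual method.

```python
# Minimal sketch (assumption, not the paper's model): score a single reply as
# propaganda-like using the reply text together with the legitimate user
# message it responds to, mirroring the message-reply relationship exploited
# in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: each sample is "parent message [SEP] reply".
train_pairs = [
    "Any news from the front? [SEP] Only state TV tells the truth, ignore the rest.",
    "Any news from the front? [SEP] Nothing new today, stay safe everyone.",
]
train_labels = [1, 0]  # 1 = propaganda reply, 0 = legitimate reply

# Bag-of-words over the joined pair; n-grams spanning both sides let the model
# pick up replies whose content is off-topic or agenda-driven relative to the
# parent message.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_pairs, train_labels)

# Moderation-time use: score one (parent, reply) pair from a single account.
print(model.predict_proba(["Any news from the front? [SEP] The West staged it all."]))
```

In practice a moderator-facing tool would only need the (parent message, reply) pair that Telegram already exposes, which is why a single observed message per account can suffice for classification.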
