Computational propaganda, the manipulation of public opinion through online methods, is a significant threat to global security, to democracy and to us as individuals. Our ability to identify these online threats is therefore of paramount importance. One method of conducting this form of manipulation is the use of automated accounts, or 'bots', on social media. While the term 'bot' is used widely online, in the media and in academic work, there are many competing definitions and uses, which makes it difficult to identify when bots are being deployed to mislead. The purpose of this research is to highlight the many issues inherent in the study of 'bots', focusing on the difficulty of bot detection, how that difficulty leaves users exposed when they must identify bots themselves, and finally how the term is used and applied online. This research is split into three papers that each explore bot detection by Twitter users in increasing detail. The first assesses the different approaches and methods for bot detection, in order to understand both the difficulty of the task and how effective these approaches are at producing reproducible results that can be replicated across studies. Having established the difficulty and unreliability inherent in these bot detection approaches, the second paper examines how effective users themselves are at identifying bots and examples of misinformation. This is important as it allows us to understand how good users are at recognising when they are being misled; here it is found that users are not particularly good at identifying malicious automation. This in turn leads to the final paper, which attempts to answer the question posed by the second: if users are not identifying bots but are nonetheless accusing accounts of being 'bots', is there another motivation at play?
Here it is found that, rather than focusing on signs of automation, accusations likely have a political component, and that the term is used more as an insult than as a genuine accusation. It is hoped that this research can serve as a starting point for further work in a number of areas: first, to highlight how exposed users are to manipulation online in the absence of clear support from social media companies; second, to further highlight the dangers of online manipulation; and lastly, to contribute to the literature on online hostility.
- Social Media
- Disinformation
- Bots
- Twitter
- X
What We Talk About When We Talk About Bots: Advancing Our Understanding of Bot Detection
Beatson, O. (Author). 1 Aug 2024
Student thesis: PhD