Project Details
Description
Most of what we know, we know through the speech of others. It’s the oil of social interaction. Yet, speech is now a primary source of social dysfunction—online and offline. Trust in authoritative sources is breaking down. We no longer trust our government, public agencies, news sources, or those holding different political views. We are split into tribes that dislike one another intensely, and this reinforces group division and polarization. We live in a time of deepening inequality and oppression. Speech is used to maintain and expand this. How can we improve speech without regulation of speech? How can speech be free and good once more?
This is an interdisciplinary project which seeks to explain how public discourse can generate significant social and epistemic harms, and to make recommendations as to how to mend them. We will focus on oppressive speech and how it spreads online. Oppressive speech is speech that demeans, threatens, harms, silences and disempowers people, as part of wider social systems of harm, by virtue of their belonging to a vulnerable group (e.g. one defined by race, gender, ethnicity, nationality, religion, sexual orientation, gender identity, disability, caste or immigration status). We will analyse how specific harms are brought about in the domain of hate speech about gender. Our case study will focus on how sexist and misogynistic speech in incel (involuntary celibate) communities normalises hostility and violence towards women. We will show how these harms are further exacerbated online through echo chambers in which users are incentivised to become radicalised.
We will explain these harms by combining the complementary strengths of philosophy of language, socio-political theories of social (in)justice, and theories of social change. An interdisciplinary approach is essential to understand how speech can shift norms and re-entrench oppressive structures, and how digital environments amplify and normalise harmful speech. The single greatest problem posed today to the democratic norms of free speech and tolerance is the manner in which oppressive actors manipulate people’s beliefs and attitudes and spread hate using social media. If we are to oppose these actors, we must understand and model their behaviours so that we can develop tools to resist them. The overarching question guiding this project is which harmful content should be combatted, and how. The project will provide philosophical guidance and policy proposals by tackling the following four aims and objectives.
| Status | Active |
|---|---|
| Effective start/end date | 1/08/23 → 31/07/25 |
Press/Media
- New Work in Philosophy (blog by Marcus Arvan) - The challenges of regulating online speech (Policy@Manchester)
  25/08/23
  1 item of Media coverage
  Press/Media: Other
Activities
- What does freedom of speech require today?
  Popa-Wyatt, M. (Participant)
  11 Jun 2024 · Activity: Participating in or organising event(s) › Participating in a conference, workshop, exhibition, performance, inquiry, course etc. › Research
- Moral Language Workshop
  Popa-Wyatt, M. (Participant)
  11 Dec 2023 · Activity: Participating in or organising event(s) › Participating in a conference, workshop, exhibition, performance, inquiry, course etc. › Research
- King's College London
  Popa-Wyatt, M. (Participant)
  9 Nov 2023 · Activity: Participating in or organising event(s) › Participating in a conference, workshop, exhibition, performance, inquiry, course etc. › Research