{"id":37967,"date":"2025-05-21T15:46:23","date_gmt":"2025-05-21T15:46:23","guid":{"rendered":"https:\/\/publicknowledge.org\/?p=37967"},"modified":"2025-11-25T06:54:44","modified_gmt":"2025-11-25T06:54:44","slug":"what-does-research-tell-us-about-technology-platform-censorship","status":"publish","type":"post","link":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/","title":{"rendered":"What Does Research Tell Us About Technology Platform \u201cCensorship\u201d?"},"content":{"rendered":"\n<p>Like many other stakeholders, Public Knowledge is preparing a response to a request for public comment from the Federal Trade Commission on the topic of \u201ctechnology platform censorship.\u201d The <a href=\"https:\/\/www.ftc.gov\/news-events\/news\/press-releases\/2025\/02\/federal-trade-commission-launches-inquiry-tech-censorship\">FTC\u2019s request<\/a> encourages respondents to reply to a series of questions by recounting ways platforms like Facebook, YouTube, and X (previously Twitter) may have disproportionately \u201cdenied or degraded\u201d users\u2019 access to services based on the content of the users\u2019 speech or affiliations. The request appears to be part of an effort by the FTC, Federal Communications Commission, and Department of Justice to break up a &#8220;<a href=\"https:\/\/x.com\/BrendanCarrFCC\/status\/1907939965251764294\">censorship cartel<\/a>&#8221; that Trump administration officials claim systematically censors Americans\u2019 political speech. 
Based on the submissions so far, the FTC can expect to receive hundreds, if not thousands, of anecdotal \u2013 and often anonymous \u2013 comments describing incidents that staff will probably be unable to verify.<\/p>\n\n\n\n<p>To ensure our own <a href=\"https:\/\/publicknowledge.org\/policy\/public-knowledge-ftc-comments-on-technology-platform-censorship\/\">comments to the FTC<\/a> are rooted in evidence, we reviewed eight years of research on political content moderation. Our literature review included research studies and white papers from academics, journalists, whistleblowers, social scientists, and platforms going back to 2018. Our goal for this post is to provide a summary of this research and the conclusions we draw from it.&nbsp;<\/p>\n\n\n\n<p><strong>Challenges of Researching Platform Content Moderation<\/strong><\/p>\n\n\n\n<p>Unfortunately, research investigating questions about algorithmic curation and bias \u2013 and <a href=\"https:\/\/www.science.org\/content\/article\/five-biggest-challenges-facing-misinformation-researchers\">content moderation in general<\/a> \u2013 has been constrained by these challenges:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Limited collaboration between platforms and researchers;&nbsp;<\/li>\n\n\n\n<li>The difficulty of defining and quantifying bias in research design;<\/li>\n\n\n\n<li>Frequent changes to the platforms\u2019 feed-ranking algorithms;&nbsp;<\/li>\n\n\n\n<li>Controlling for platform features such as content personalization; and&nbsp;<\/li>\n\n\n\n<li>In the absence of platform data, the need to work with user histories or web-scraped data that may reflect the user\u2019s own preferences (such as channels or subscriptions).<\/li>\n<\/ul>\n\n\n\n<p>If anything, the technology platforms have compounded these challenges over time by restricting access to their data: Meta <a 
href=\"https:\/\/www.techpolicy.press\/the-demise-of-crowdtangle-and-what-it-means-for-independent-technology-research\/\">unwound<\/a> its CrowdTangle tool (researchers consistently say the company\u2019s new \u201ccontent library\u201d does <em>not<\/em> provide the same insight) and X has <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/15365042241252125\">restricted access<\/a> and increased application programming interface, or API, fees for researchers. These barriers make it easier for conspiracy theories about content moderation to emerge and spread. Despite these challenges, clear themes emerged from the body of research.&nbsp;<\/p>\n\n\n\n<p><strong>Themes from Research Regarding Political Content Moderation<\/strong><\/p>\n\n\n\n<p>Our secondary research review showed these dominant themes (see the subsequent sections of this post for links to the relevant studies):<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>There is little empirical evidence that platforms disproportionately deny or degrade conservative users\u2019 access to services or that conservative voices or posts are disproportionately moderated due to their speech or affiliations.&nbsp;<\/li>\n\n\n\n<li>If anything, platform algorithms advantage conservative, right-wing, or populist content because such content tends to be highly engaging, and because there are structural advantages for right-wing or populist political influencers on technology platforms.&nbsp;<\/li>\n\n\n\n<li>Some of the characteristics that make this content more engaging also make it more likely to violate platform content moderation policies. So when conservative or populist content is disproportionately moderated, it is because it is more likely to violate the platforms\u2019 community standards and terms of service. That is, asymmetric moderation results from asymmetric user behavior. 
This dynamic crosses international borders.&nbsp;<\/li>\n\n\n\n<li>To the extent that platforms do disproportionately deny or degrade service based on the content of speech (even if it does not violate platform policies), it overwhelmingly impacts marginalized communities, including people of color, LGBTQ+ people, religious minorities, and women. This may be due to how content policies are crafted, bias in moderation algorithms and training sets, and\/or automated content moderation systems that do not understand cultural context or language cues. For technology platform users in general, these automated systems are incapable of understanding political motivation or affiliation.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>Note: In order to focus on dominant themes, we didn\u2019t include every study we reviewed in this post. We encourage readers to use the links provided to understand the methodology in each study, and the citations within each study to access more information and resources.&nbsp;<\/p>\n\n\n\n<p><strong>There Is Little Empirical Evidence That Conservative Voices Are Over-Moderated&nbsp;<\/strong><\/p>\n\n\n\n<p>Researchers at New York University Stern School of Business\u2019s Center for Business and Human Rights produced what may be the most <a href=\"https:\/\/bhr.stern.nyu.edu\/wp-content\/uploads\/2024\/02\/NYUFalseAccusation_2.pdf\">comprehensive review<\/a> of available research (as of February 2021) addressing the claim that platforms are biased in their moderation of conservatives. These researchers also conducted various analyses and rankings using Facebook\u2019s CrowdTangle tool in the 11-month run-up to the 2020 US election. They found that right-leaning Facebook pages contained the most-engaged-with posts; right-wing media pages trounced mainstream media pages in engagement; and Donald Trump beat all other US political figures on the same measure. 
Independent studies by NewsWhip and Media Matters for America, cited in the same review, also showed that right-leaning Facebook pages and media publications outperformed left-leaning pages or performed similarly. The researchers also recounted a study showing that on YouTube, \u201cpartisan right\u201d channels like Fox News and The Daily Wire performed as well as or better than \u201cpartisan left\u201d channels, such as MSNBC and Vox, on key measures.&nbsp;<\/p>\n\n\n\n<p>Research also shows that outcomes users attribute to \u201cbias\u201d may actually be the result of a neutral product design. One <a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3274417\">research study about Google Search<\/a> in 2018 noted that it\u2019s \u201cdifficult to tease apart confounds inherent to the scale and complexity of the web, the constantly evolving metrics that search engines optimize for, and the efforts of search engines to prevent gaming by third parties.\u201d This study found that the direction and magnitude of political \u201clean\u201d in test subjects\u2019 search engine results pages (SERPs) depended largely on the input query, not the self-reported ideology of the user. It also varied by component type on the SERP (e.g., &#8220;answer boxes&#8221;) and by the platform\u2019s variable ranking decisions. If anything, \u201cGoogle\u2019s rankings shifted the average lean of SERPs to the right.\u201d Another <a href=\"https:\/\/datasociety.net\/library\/searching-for-alternative-facts\/\">study of Google Search<\/a> from 2018 showed that conservative users of the platform did not fully realize how dependent their results were on the phrases they used in their search queries. Nor did they have a consistent or accurate understanding of the mechanisms by which the company returns search results. (In the authors\u2019 view, there\u2019s no reason to believe this would differ for liberal users.) 
A <a href=\"https:\/\/www.economist.com\/graphic-detail\/2019\/06\/08\/google-rewards-reputable-reporting-not-left-wing-politics\">study published in <em>The Economist<\/em><\/a> in 2019 showed that Google\u2019s search algorithm mostly rewarded reputable reporting. That is, the most represented sources were center-left and center-right, and results indicating \u201cbias\u201d were actually the result of <em>the user\u2019s<\/em> search term.&nbsp;<\/p>\n\n\n\n<p><strong>If Anything, Platforms\u2019 Engagement-Based Design Advantages Right-Wing Content<\/strong><\/p>\n\n\n\n<p>The single biggest driver of the societal impact of platforms\u2019 content moderation is rooted in human nature: People are wired to pay more attention to information that generates a strong reaction. Research studies have shown that engagement on social media is associated with, for example, increased <a href=\"https:\/\/pure.uva.nl\/ws\/files\/66893725\/20563051211059710.pdf\">negativity and anger<\/a>; <a href=\"https:\/\/news.tulane.edu\/pr\/rage-clicks-study-shows-how-political-outrage-fuels-social-media-engagement\">outrage and confrontation<\/a>; or <a href=\"https:\/\/spssi.onlinelibrary.wiley.com\/doi\/10.1111\/sipr.12091\">incivility and conflict<\/a>. Platforms must maximize engagement (e.g., posting, dwelling, liking, commenting, sharing) to optimize profit because of their advertising-based business model. As a result, even modest tweaks to algorithms to increase engagement (such as one Facebook made in 2018 to emphasize \u201cmeaningful social interactions\u201d) can end up <a href=\"https:\/\/www.cnn.com\/2021\/10\/27\/tech\/facebook-papers-meaningful-social-interaction-news-feed-math\/index.html\">amplifying provocative and negative content<\/a>. 
And as we describe in the next section, research consistently shows that right-wing sources use this type of content more often, and more effectively, on digital platforms.<\/p>\n\n\n\n<p>A <a href=\"https:\/\/www.economist.com\/graphic-detail\/2020\/08\/01\/twitters-algorithm-does-not-seem-to-silence-conservatives\">study published in <em>The Economist<\/em><\/a> in 2020 aimed to determine what content then-Twitter\u2019s algorithm promoted. The researchers found that compared to its previous chronological feed, Twitter\u2019s new \u201crelevant\u201d recommendation engine favored inflammatory tweets that are more emotive and more likely to have come from untrustworthy or hyper-partisan websites. Another <a href=\"https:\/\/www.economist.com\/graphic-detail\/2020\/09\/10\/facebook-offers-a-distorted-view-of-american-news\">study <em>The Economist <\/em>published<\/a> later that year focused on Facebook. It showed that the most prominent news sources on Facebook are significantly more slanted to the right than those found elsewhere on the web, and that right-wing content from Fox News and Breitbart has more Facebook interactions than left-leaning news sites.&nbsp;<\/p>\n\n\n\n<p>The aforementioned <a href=\"https:\/\/bhr.stern.nyu.edu\/wp-content\/uploads\/2024\/02\/NYUFalseAccusation_2.pdf\">report from NYU\u2019s Stern Center for Business and Human Rights<\/a> also concluded that social platforms\u2019 algorithms often amplify right-wing voices, granting them greater reach than left-leaning or nonpartisan content creators. The authors analyzed engagement data and case studies of content related to high-profile incidents, finding no sign of anti-conservative bias in enforcement, even around contentious events like the January 6 riot at the US Capitol. They noted that right-leaning content frequently dominates user engagement metrics, largely due to Facebook\u2019s algorithmic promotion systems, which reward content that provokes strong reactions. 
In other words, because Facebook\u2019s feed algorithm optimizes for engagement, and outrage-driven or partisan posts often generate more clicks and shares, conservative pages that specialize in such content tend to benefit disproportionately.<\/p>\n\n\n\n<p>Media Matters has tracked engagement on social media through several studies dating back to 2018. These studies undermine the idea that Facebook, in particular, is biased in its content moderation and reinforce the idea that platform algorithms favor engagement above all. One <a href=\"https:\/\/www.mediamatters.org\/facebook\/new-study-finds-facebook-not-censoring-conservatives-despite-their-repeated-attacks\">nine-month study<\/a> completed in 2020 found that partisan content (both left and right) did better than nonpartisan content on Facebook, but \u201cright-leaning pages consistently earned more average weekly interactions than either left-leaning or ideologically nonaligned pages.\u201d The findings were similar to those in studies Media Matters conducted in <a href=\"https:\/\/www.mediamatters.org\/facebook\/study-analysis-top-facebook-pages-covering-american-political-news\">2018<\/a> and <a href=\"https:\/\/www.mediamatters.org\/facebook\/study-facebook-still-not-censoring-conservatives\">2019<\/a>. 
Their <a href=\"https:\/\/www.mediamatters.org\/facebook\/facebook-tweaked-its-news-feed-algorithm-and-right-leaning-pages-are-reaping-benefits\">research in 2021<\/a> showed that these effects were actually <em>compounded<\/em> after Facebook tweaked its algorithm to reduce the prominence of news, civic, and health information, and as video became more popular on the platform.<\/p>\n\n\n\n<p>A study from <a href=\"https:\/\/www.politico.com\/news\/2020\/10\/26\/censorship-conservatives-social-media-432643\">Politico and the Institute for Strategic Dialogue<\/a> in 2020 showed that \u201cright-wing social media influencers, conservative media outlets, and other GOP supporters dominate online discussions\u201d around the Black Lives Matter movement and voter fraud, including in Facebook posts, Instagram feeds, Twitter messages, and conversations on two popular message boards.<\/p>\n\n\n\n<p>A 2022 study from the <a href=\"https:\/\/www.brookings.edu\/articles\/echo-chambers-rabbit-holes-and-ideological-bias-how-youtube-recommends-content-to-real-users\/\">Brookings Institution<\/a> focused on YouTube, one of the first platforms to offer \u201crecommendations\u201d to users, and found that regardless of the ideology of the study participant, the algorithm pushes all users in a moderately conservative direction.&nbsp;<\/p>\n\n\n\n<p>Most publicly available data for Facebook shows that conservative news regularly ranks among the most popular content on the site, and Facebook has acknowledged that right-wing content excels at the engagement measures that drive algorithmic amplification. In the election year of 2020, study after study found that the Facebook posts with the most engagement in the United States \u2013 measured by likes, comments, shares, and reactions \u2013 were organic posts from conservative influencers outside the mainstream media. 
When asked about this dynamic, a <a href=\"https:\/\/www.politico.com\/news\/2020\/09\/26\/facebook-conservatives-2020-421146\">Facebook executive noted<\/a>, \u201cRight-wing populism is always more engaging\u201d and said that the content speaks to \u201can incredibly strong, primitive emotion\u201d by touching on such topics as \u201cnation, protection, the other, anger, fear.\u201d<\/p>\n\n\n\n<p>Twitter has also acknowledged that its algorithms favored right-wing content. In 2021, <a href=\"https:\/\/cdn.cms-twdigitalassets.com\/content\/dam\/blog-twitter\/official\/en_us\/company\/2021\/rml\/Algorithmic-Amplification-of-Politics-on-Twitter.pdf\">Twitter published its own study<\/a> that \u201creveal[ed] a remarkably consistent trend: In six out of seven countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left.\u201d (This was <em>before<\/em> Elon Musk purchased the platform and rebranded it to X.) Germany was a notable exception. Twitter, at the time, acknowledged the results were problematic but <a href=\"https:\/\/www.theguardian.com\/technology\/2021\/oct\/22\/twitter-admits-bias-in-algorithm-for-rightwing-politicians-and-news-outlets\">could not determine<\/a> whether certain tweets received preferential treatment because of how the Twitter algorithm was constructed or because of how users interacted with it.<\/p>\n\n\n\n<p>Another <a href=\"https:\/\/journals.sagepub.com\/doi\/10.1177\/19401612241311886\">cross-national comparative study<\/a> based on Twitter in 26 countries, published in 2025, also found that this pattern extends internationally and that certain political ideologies are linked to a higher likelihood of spreading misinformation. 
Specifically, politicians associated with radical right-wing populist parties \u2013 characterized by exclusionary ideologies and hostile relations to democratic institutions \u2013 spread more online misinformation than their mainstream counterparts. The authors concluded that misinformation should be \u201cexamined as an aspect of party politics, serving as a strategy designed to mobilize voters against mainstream parties and democratic institutions.\u201d<\/p>\n\n\n\n<p>More recently, a <a href=\"https:\/\/globalwitness.org\/en\/campaigns\/digital-threats\/tiktok-and-x-recommend-pro-afd-content-to-non-partisan-users-ahead-of-the-german-elections\/\">study focused on the role of social media<\/a> in the February 2025 election in Germany showed that X, TikTok, and Instagram (a Meta platform) were all most likely to show right-wing content to nonpartisan users. Content shown across every platform tested displayed a right-leaning bias. This included both content from the accounts the researchers set out to follow, and content that was selected \u201cFor You\u201d by the platforms\u2019 recommender systems.&nbsp;<\/p>\n\n\n\n<p>Besides the lift from the algorithms, conservative elites may also gain greater engagement on technology platforms due to structural advantages in how they use these platforms. One sociologist and professor noted in her 2019 book, \u201c<a href=\"https:\/\/www.hup.harvard.edu\/books\/9780674972339\">The Revolution That Wasn\u2019t<\/a>,\u201d that \u201cthere is a lopsided digital activism gap that favors conservatives.\u201d For example, online participation is greater with middle- and upper-class movements than their working-class counterparts, and conservative activists tend to come from higher income levels than progressives. 
Conservative groups, therefore, have more time and resources to invest in content and engagement, and their simple, powerful messaging focused on \u201cfreedom\u201d and threats to America fits best with social media\u2019s short attention span and character limits.&nbsp;<\/p>\n\n\n\n<p>A <a href=\"https:\/\/www.pewresearch.org\/journalism\/2024\/11\/18\/americas-news-influencers\/\">nationally representative survey of Americans conducted by Pew Research<\/a> in 2024 shows another advantage that now accrues to conservative users: the growing popularity, distribution, and political orientation of news influencers. About one in five Americans now say they regularly get news from news influencers on social media. News influencers are defined as individuals who regularly post about current events and civic issues on social media and have at least 100,000 followers on any of the major social media platforms (Facebook, Instagram, TikTok, YouTube, and particularly X, which is the most common site for influencers to share content). According to Pew\u2019s research, more news influencers explicitly present a politically right-leaning orientation than a left-leaning one in their account bios, posts, websites, or media coverage. Influencers on Facebook are particularly likely to prominently express right-leaning views.<\/p>\n\n\n\n<p><strong>Right-Wing Content Is More Likely To Violate Platforms\u2019 Community Standards&nbsp;<\/strong><\/p>\n\n\n\n<p>One of the most consistent themes in the research on content moderation of political content is that what users may perceive as \u201cbiased\u201d asymmetric moderation is actually the result of users\u2019 own asymmetric behavior. Specifically, the research shows that conservative, right-wing, and populist platform users (the term varies by research project) are more likely to violate the platforms\u2019 terms of service and\/or community standards. 
Many of the examples date from 2020 and 2021, when platforms evolved their content moderation policies in response to the COVID-19 pandemic and the 2020 US election, both of which became highly politicized. In the interest of public health, safety, and democratic participation, most platforms selected authoritative sources of information such as the World Health Organization, Centers for Disease Control, and local election offices to calibrate their content moderation, up- and down-rank user content, fact-check and label content, and direct people to the latest available information. (For more details by platform in regard to COVID-19, see our <a href=\"https:\/\/publicknowledge.org\/the-pandemic-proves-we-need-a-superfund-to-clean-up-misinformation-on-the-internet\/\">blog post<\/a>.) Users sharing information inconsistent with that of the authoritative sources selected by the platforms found themselves in violation of platform policies. Conspiracy theories, content that calls for violence against particular groups, and other forms of violative content incompatible with platform standards also resulted in disproportionate moderation.<\/p>\n\n\n\n<p>For example, a <a href=\"https:\/\/arxiv.org\/abs\/2407.16014\">study of 6,500 state legislators<\/a> on Facebook and Twitter during the tumultuous time in 2020 and early 2021 (e.g., the pandemic, the 2020 election, and the January 6 riot at the US Capitol) showed that state legislators could gain increased attention on both platforms by sharing unverified claims or using uncivil language such as insults or extreme statements. The results affirm that platform algorithms generally favor content likely to get a strong reaction. 
However, Republican legislators were significantly more likely to post \u201clow-credibility content\u201d on Facebook and Twitter than Democrats, and Republican legislators who posted low-credibility information were more likely to receive greater online attention than Democrats.&nbsp;<\/p>\n\n\n\n<p>A <a href=\"https:\/\/osf.io\/preprints\/psyarxiv\/vk5yj_v3\">new research report focused on X\u2019s Community Notes<\/a> program, now in preprint, examines whether there are partisan differences in the sharing of misleading information. The study is particularly relevant now that both Meta and TikTok have moved to community notes (user-sourced assessments of content) to add context to posts instead of third-party fact-checking partnerships. The researchers\u2019 abstract highlights that posts by Republicans are far more likely to be flagged as misleading compared to posts by Democrats, and not because Republicans are over-represented among X users. Their findings \u201cprovide strong evidence of a partisan asymmetry in misinformation sharing which cannot be attributed to political bias on the part of raters, and indicate that Republicans will be sanctioned more than Democrats even if platforms transition from professional fact-checking to Community Notes.\u201d<\/p>\n\n\n\n<p>One 2020 <a href=\"https:\/\/cbw.sh\/static\/pdf\/jiang-aaai20.pdf\">study used YouTube<\/a> as a lens to investigate whether the political leaning of a video plays a role in the moderation decisions for its associated comments. The researchers found that user comments <em>were<\/em> more likely to be moderated under right-leaning videos, but this difference is \u201cwell-justified\u201d because the videos and comments are <em>also<\/em> more likely to have characteristics that violate the platform\u2019s rules. These include extreme content that calls for violence or spreads conspiracy theories, or misinformation based on fact-checks. 
Or, the videos and comments have poor social engagement (such as a high \u201cdislike\u201d rate). Once these behavioral variables were balanced, there was no significant difference in moderation likelihood across the political spectrum.<\/p>\n\n\n\n<p>A <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07942-8\">prominent study published in <em>Nature<\/em><\/a> showed that users estimated to be pro-Trump\/conservative were, in fact, more likely to be suspended from Facebook than those estimated to be pro-Biden\/liberal. However, this was because conservative users shared far more links to low-quality news sites \u2013 even when \u201cnews quality\u201d was determined by groups of only Republicans \u2013 and they had higher estimated likelihoods of being bots. As noted above, Facebook\u2019s recommendation algorithm maximizes for user engagement, and this study was one of several that found that misinformation content was more engaging to right-wing audiences. Facebook\u2019s algorithm also appeared to rank misinformation more highly for right-wing users. (In other words, Facebook\u2019s algorithm is doing what it is optimized to do: serve up more content that proves engaging to a particular audience.) The authors concluded that political asymmetry in moderation resulted from asymmetries in violative behavior, not politically biased content policies or political bias on the part of social media companies. This study was one of four that studied, with Facebook\u2019s cooperation, the impact of Facebook\u2019s recommendation algorithm during the 2020 US presidential election.&nbsp;<\/p>\n\n\n\n<p>Another group of data scientists and academic researchers who were given access to Facebook data regarding the impact of social media on elections and democracy in 2019 <a href=\"https:\/\/www.npr.org\/2020\/10\/05\/918520692\/facebook-keeps-data-secret-letting-conservative-bias-claims-persist\">noted<\/a> the same thing. 
They found that most of the high-profile examples of moderation of conservative content resulted from \u201cmore false and misleading content on the right&#8221; at a time when platforms were more aggressively moderating content related to elections. One of the researchers noted that, if anything, \u201cFacebook&#8217;s algorithms could also be helping more people see right-wing content that&#8217;s meant to evoke passionate reactions.\u201d<\/p>\n\n\n\n<p>Researchers from the Observatory on Social Media at Indiana University, in <a href=\"https:\/\/www.regulations.gov\/comment\/FTC-2025-0023-1297\">their own comments to the FTC<\/a>, described two studies they conducted that explored this question, one from 2017 and one from 2019. The studies \u201cdid not support claims of platform censorship.\u201d The researchers noted, \u201cThe much simpler interpretation of the data is that the online behavior of partisans is not symmetric across the political spectrum.\u201d<\/p>\n\n\n\n<p><strong>Platforms Are Most Likely To Degrade Access for Marginalized Communities<\/strong><\/p>\n\n\n\n<p>There is also a substantial body of research about the discriminatory impact of content moderation on marginalized communities, specifically people of color, LGBTQ+ people, religious minorities, and women. It was informed by a history of research designed to understand the impact of automated decision-making (in real estate, employment, financial services, and the like) on individuals who share characteristics protected by anti-discrimination legislation, including race, gender, and religion. Those systems, designed to profile individuals and make decisions about the allocation of economic opportunities, consistently showed the strong potential for bias in computational systems. In particular, they were shown to reproduce the historical, inequitable outcomes embedded in their data training sets and project them into the future as predictions of future outcomes. 
(Public Knowledge wrote about how harms from the algorithmic distribution of content are too often concentrated on historically marginalized communities in <a href=\"https:\/\/publicknowledge.org\/should-algorithms-be-regulated-part-2-cataloging-the-harms-of-algorithmic-decision-making\/\">this blog post<\/a>. We have also researched and written extensively about <a href=\"https:\/\/publicknowledge.org\/moderating-race-on-platforms\/\">moderating race on platforms<\/a> and <a href=\"https:\/\/publicknowledge.org\/where-the-rubber-meets-the-road-section-230-and-civil-rights\/\">Section 230 and civil rights<\/a>.)<\/p>\n\n\n\n<p>In her 2018 book, <a href=\"https:\/\/nyupress.org\/9781479837243\/algorithms-of-oppression\/\"><em>Algorithms of Oppression<\/em><\/a>, UCLA professor Safiya Umoja Noble used textual and media searches to show \u201chow negative biases against women of color are embedded in search engine results and algorithms.\u201d She shared the premise that the profit motives of platforms combined with their monopoly status lead to a biased set of search algorithms. 
In regard to content moderation, research has focused on how the various <a href=\"https:\/\/www.brennancenter.org\/sites\/default\/files\/2021-08\/Double_Standards_Content_Moderation.pdf\">elements of content moderation<\/a> \u2013 the drafting of policies, the methods of enforcement, and the vehicles for redress such as user appeals \u2013 often mean that the voices of marginalized communities are subject to disproportionate moderation while harms targeting them remain unaddressed and the perpetrators protected.<\/p>\n\n\n\n<p>For example, <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11420153\/\">a field study of actual posts<\/a> from a popular neighborhood-based social media platform found that when users talk about their experiences as targets of racism, their posts are disproportionately flagged for removal as \u201ctoxic\u201d by five widely used moderation algorithms from major online platforms, including the most recent large language models. In the same study, human users also disproportionately flagged these disclosures for removal. The researchers further demonstrated a chilling effect: simply witnessing these valid posts discussing experiences with racism getting removed made Black Americans feel less welcome online and diminished their sense of community belonging.&nbsp;<\/p>\n\n\n\n<p>Another <a href=\"https:\/\/deepblue.lib.umich.edu\/handle\/2027.42\/169587\">study was specifically designed<\/a> to understand which types of social media users have content and accounts removed more frequently than others, what types of content and accounts are removed, and how content removed may differ between groups. The researchers found that three groups of social media users experienced content and account removals more often than others: political conservatives, transgender people, and Black people. However, the types of content removed from each group varied substantially. 
Consistent with the studies cited above, conservative participants\u2019 removals often involved harmful content removed according to site guidelines (e.g., posts deemed offensive, COVID-19 claims inconsistent with those of authoritative sources, or hate speech), while transgender and Black participants\u2019 removals often involved content related to expressing their marginalized identities. This content was removed even though it followed site policies or fell into content moderation gray areas.<\/p>\n\n\n\n<p>There are multiple contributors to this double standard. A 2021 <a href=\"https:\/\/www.brennancenter.org\/our-work\/research-reports\/double-standards-social-media-content-moderation\">report from the Brennan Center for Justice<\/a> identifies four key drivers: 1) how content policies are crafted, 2) bias in automated moderation algorithms, 3) content filters that lack cultural context, and 4) an inability to detect language nuance. Algorithmic systems may attribute the use of words or phrases describing authentic experiences related to gender identity, racism, domestic violence, or mental health to violative behavior on the part of users. Human moderators may also manifest their own bias: whether they lack training, time, or cultural understanding, they make false positive calls on content related to racism and equity more often for some groups. 
In the Brennan Center report \u2013 based on Facebook, Instagram, YouTube, and Twitter \u2013 the researchers found that \u201ccontent moderation at times results in mass takedowns of speech from marginalized groups [communities of color, women, LGBTQ+ communities, and religious minorities], while more dominant individuals and groups benefit from more nuanced approaches like warning labels or temporary demonetization.\u201d The implication is that marginalized voices face extra hurdles to free expression online.<\/p>\n\n\n\n<p>More recently, a group of over 200 researchers signed on to a <a href=\"https:\/\/www.aibiasconsensus.org\/\">letter<\/a> that \u201caffirm[ed] the scientific consensus that artificial intelligence can exacerbate bias and discrimination in society,\u201d noting that \u201cthousands of scientific studies\u201d have shown that AI systems may violate civil and human rights even if their users and creators are well-intentioned.<\/p>\n\n\n\n<p><strong>Summary of Research Conclusions<\/strong><\/p>\n\n\n\n<p>Empirical research over the past decade reveals that social media content moderation has not always been neutral in its social or political impact. But it is marginalized voices that often bear a disproportionate burden \u2013 whether through higher rates of wrongful content removal, diminished reach in algorithmic feeds, demonetization, or threats to free expression that come from harassing, hateful, and false information posted by others online. 
Conversely, disproportionate moderation of conservative, right-wing, or populist content generally results from asymmetric compliance with platform community standards and terms of service.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>While Trump administration officials claim a &#8220;censorship cartel&#8221; is targeting conservatives online, the available data tells a different story.<\/p>\n","protected":false},"author":189,"featured_media":37969,"parent":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[5],"tags":[11,31,14],"class_list":["post-37967","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-insights","tag-content-moderation","tag-free-expression","tag-platform-regulation"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.5 (Yoast SEO v26.5) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>What Does Research Tell Us About Technology Platform \u201cCensorship\u201d? 
- Public Knowledge<\/title>\n<meta name=\"description\" content=\"Though Trump administration officials claim a &quot;censorship cartel&quot; is targeting conservatives online, the available data tells a different story.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"What Does Research Tell Us About Technology Platform \u201cCensorship\u201d?\" \/>\n<meta property=\"og:description\" content=\"Though Trump administration officials claim a &quot;censorship cartel&quot; is targeting conservatives online, the available data tells a different story.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/\" \/>\n<meta property=\"og:site_name\" content=\"Public Knowledge\" \/>\n<meta property=\"article:published_time\" content=\"2025-05-21T15:46:23+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-11-25T06:54:44+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/05\/Free-Expression-and-Content-Moderation.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2000\" \/>\n\t<meta property=\"og:image:height\" content=\"1000\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Lisa Macpherson\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Lisa Macpherson\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"18 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/\"},\"author\":{\"name\":\"Lisa Macpherson\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/person\/757e28331d7a5e31a3290be1d16d219b\"},\"headline\":\"What Does Research Tell Us About Technology Platform \u201cCensorship\u201d?\",\"datePublished\":\"2025-05-21T15:46:23+00:00\",\"dateModified\":\"2025-11-25T06:54:44+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/\"},\"wordCount\":4042,\"publisher\":{\"@id\":\"https:\/\/publicknowledge.org\/#organization\"},\"image\":{\"@id\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/05\/Free-Expression-and-Content-Moderation.png\",\"keywords\":[\"Content Moderation\",\"Free Expression\",\"Platform Regulation\"],\"articleSection\":[\"Insights\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/\",\"url\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/\",\"name\":\"What Does Research Tell Us About Technology Platform \u201cCensorship\u201d? 
- Public Knowledge\",\"isPartOf\":{\"@id\":\"https:\/\/publicknowledge.org\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/05\/Free-Expression-and-Content-Moderation.png\",\"datePublished\":\"2025-05-21T15:46:23+00:00\",\"dateModified\":\"2025-11-25T06:54:44+00:00\",\"description\":\"Though Trump administration officials claim a \\\"censorship cartel\\\" is targeting conservatives online, the available data tells a different story.\",\"breadcrumb\":{\"@id\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#primaryimage\",\"url\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/05\/Free-Expression-and-Content-Moderation.png\",\"contentUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/05\/Free-Expression-and-Content-Moderation.png\",\"width\":2000,\"height\":1000},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/publicknowledge.org\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"What Does Research Tell Us About Technology Platform 
\u201cCensorship\u201d?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/publicknowledge.org\/#website\",\"url\":\"https:\/\/publicknowledge.org\/\",\"name\":\"Public Knowledge\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/publicknowledge.org\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/publicknowledge.org\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/publicknowledge.org\/#organization\",\"name\":\"Public Knowledge\",\"url\":\"https:\/\/publicknowledge.org\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2021\/12\/pk_social_logo-2.png\",\"contentUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2021\/12\/pk_social_logo-2.png\",\"width\":400,\"height\":200,\"caption\":\"Public Knowledge\"},\"image\":{\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/person\/757e28331d7a5e31a3290be1d16d219b\",\"name\":\"Lisa Macpherson\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/a7ccfea9aef75381949570e9237ff7f0ef0efcd0f80308496086b5b8f6a2989e?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/a7ccfea9aef75381949570e9237ff7f0ef0efcd0f80308496086b5b8f6a2989e?s=96&d=mm&r=g\",\"caption\":\"Lisa Macpherson\"},\"url\":\"https:\/\/publicknowledge.org\/author\/lisa-macpherson\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"What Does Research Tell Us About Technology Platform \u201cCensorship\u201d? - Public Knowledge","description":"Though Trump administration officials claim a \"censorship cartel\" is targeting conservatives online, the available data tells a different story.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/","og_locale":"en_US","og_type":"article","og_title":"What Does Research Tell Us About Technology Platform \u201cCensorship\u201d?","og_description":"Though Trump administration officials claim a \"censorship cartel\" is targeting conservatives online, the available data tells a different story.","og_url":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/","og_site_name":"Public Knowledge","article_published_time":"2025-05-21T15:46:23+00:00","article_modified_time":"2025-11-25T06:54:44+00:00","og_image":[{"width":2000,"height":1000,"url":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/05\/Free-Expression-and-Content-Moderation.png","type":"image\/png"}],"author":"Lisa Macpherson","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Lisa Macpherson","Est. 
reading time":"18 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#article","isPartOf":{"@id":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/"},"author":{"name":"Lisa Macpherson","@id":"https:\/\/publicknowledge.org\/#\/schema\/person\/757e28331d7a5e31a3290be1d16d219b"},"headline":"What Does Research Tell Us About Technology Platform \u201cCensorship\u201d?","datePublished":"2025-05-21T15:46:23+00:00","dateModified":"2025-11-25T06:54:44+00:00","mainEntityOfPage":{"@id":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/"},"wordCount":4042,"publisher":{"@id":"https:\/\/publicknowledge.org\/#organization"},"image":{"@id":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#primaryimage"},"thumbnailUrl":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/05\/Free-Expression-and-Content-Moderation.png","keywords":["Content Moderation","Free Expression","Platform Regulation"],"articleSection":["Insights"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/","url":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/","name":"What Does Research Tell Us About Technology Platform \u201cCensorship\u201d? 
- Public Knowledge","isPartOf":{"@id":"https:\/\/publicknowledge.org\/#website"},"primaryImageOfPage":{"@id":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#primaryimage"},"image":{"@id":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#primaryimage"},"thumbnailUrl":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/05\/Free-Expression-and-Content-Moderation.png","datePublished":"2025-05-21T15:46:23+00:00","dateModified":"2025-11-25T06:54:44+00:00","description":"Though Trump administration officials claim a \"censorship cartel\" is targeting conservatives online, the available data tells a different story.","breadcrumb":{"@id":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#primaryimage","url":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/05\/Free-Expression-and-Content-Moderation.png","contentUrl":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/05\/Free-Expression-and-Content-Moderation.png","width":2000,"height":1000},{"@type":"BreadcrumbList","@id":"https:\/\/publicknowledge.org\/what-does-research-tell-us-about-technology-platform-censorship\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/publicknowledge.org\/"},{"@type":"ListItem","position":2,"name":"What Does Research Tell Us About Technology Platform \u201cCensorship\u201d?"}]},{"@type":"WebSite","@id":"https:\/\/publicknowledge.org\/#website","url":"https:\/\/publicknowledge.org\/","name":"Public 
Knowledge","description":"","publisher":{"@id":"https:\/\/publicknowledge.org\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/publicknowledge.org\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/publicknowledge.org\/#organization","name":"Public Knowledge","url":"https:\/\/publicknowledge.org\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/publicknowledge.org\/#\/schema\/logo\/image\/","url":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2021\/12\/pk_social_logo-2.png","contentUrl":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2021\/12\/pk_social_logo-2.png","width":400,"height":200,"caption":"Public Knowledge"},"image":{"@id":"https:\/\/publicknowledge.org\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/publicknowledge.org\/#\/schema\/person\/757e28331d7a5e31a3290be1d16d219b","name":"Lisa Macpherson","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/publicknowledge.org\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/a7ccfea9aef75381949570e9237ff7f0ef0efcd0f80308496086b5b8f6a2989e?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a7ccfea9aef75381949570e9237ff7f0ef0efcd0f80308496086b5b8f6a2989e?s=96&d=mm&r=g","caption":"Lisa 
Macpherson"},"url":"https:\/\/publicknowledge.org\/author\/lisa-macpherson\/"}]}},"_links":{"self":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/posts\/37967","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/users\/189"}],"replies":[{"embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/comments?post=37967"}],"version-history":[{"count":0,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/posts\/37967\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/media\/37969"}],"wp:attachment":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/media?parent=37967"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/categories?post=37967"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/tags?post=37967"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}