{"id":18864,"date":"2020-01-29T12:24:19","date_gmt":"2020-01-29T12:24:19","guid":{"rendered":"https:\/\/www.publicknowledge.org\/?p=18864"},"modified":"2021-12-07T21:29:23","modified_gmt":"2021-12-07T21:29:23","slug":"moderating-race-on-platforms","status":"publish","type":"post","link":"https:\/\/publicknowledge.org\/moderating-race-on-platforms\/","title":{"rendered":"Moderating Race on Platforms"},"content":{"rendered":"<p>In the early fall of 2019, Ryan Williams was driving out of a garage with his wife and child when he was allegedly called a racial epithet by his white neighbor and the neighbor\u2019s daughter. When Williams got out of his car, the neighbor called the police, and as the police arrived, Williams, like many people of color, recorded the interaction on his phone. In one of the few descriptions of the incident, <a href=\"https:\/\/www.washingtonpost.com\/local\/before-a-hug-sparked-a-debate-about-race-and-forgiveness-a-maryland-man-decided-to-out-a-neighbor-for-using-the-n-word-despite-an-apology\/2019\/10\/09\/c025ebce-eabb-11e9-9306-47cb0324fd44_story.html\">a <\/a><a href=\"https:\/\/www.washingtonpost.com\/local\/before-a-hug-sparked-a-debate-about-race-and-forgiveness-a-maryland-man-decided-to-out-a-neighbor-for-using-the-n-word-despite-an-apology\/2019\/10\/09\/c025ebce-eabb-11e9-9306-47cb0324fd44_story.html\">perspective<\/a><a href=\"https:\/\/www.washingtonpost.com\/local\/before-a-hug-sparked-a-debate-about-race-and-forgiveness-a-maryland-man-decided-to-out-a-neighbor-for-using-the-n-word-despite-an-apology\/2019\/10\/09\/c025ebce-eabb-11e9-9306-47cb0324fd44_story.html\" target=\"_blank\" rel=\"noopener noreferrer\"> piece in the Washington Post<\/a>, the video seems to show that the neighbor and the neighbor\u2019s daughter acknowledged calling Williams the racial epithet and apologized. The police asked some questions, decided that the incident was not worth any more of their time, and left. 
<a href=\"https:\/\/www.bet.com\/news\/national\/2019\/09\/25\/white-man-calls-the-cops-after-calling-a-black-man-the-n-word.html\" target=\"_blank\" rel=\"noopener noreferrer\">News<\/a> outlets<a href=\"https:\/\/newsone.com\/3887863\/white-man-n-word-called-cops\/\" target=\"_blank\" rel=\"noopener noreferrer\"> covered the incident<\/a> and, in light of what happened, Williams decided to write down his feelings about what he and his family went through and more broadly on race relations in the U.S. on Medium, a popular blog platform, and post his video on YouTube.<\/p>\n<p>In the Medium post, Williams talked about how the incident made him feel and also called out the neighbor by name for his actions. Currently, neither the Medium post, nor the YouTube video that Williams posted, are online on either platform as both were subsequently taken down by those respective platforms. What is at issue is that Williams found himself at the receiving end of seemingly-neutral content moderation policies that, <a href=\"https:\/\/arxiv.org\/pdf\/1707.01477.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">like other content moderation policies<\/a>, may disproportionately marginalize the voices of people of color. This is not an uncommon scenario. 
As<a href=\"https:\/\/www.vox.com\/recode\/2019\/8\/15\/20806384\/social-media-hate-speech-bias-black-african-american-facebook-twitter\" target=\"_blank\" rel=\"noopener noreferrer\"> Recode highlighted in August<\/a>, \u201cnatural language processing AI \u2014 which is often proposed as a tool to objectively identify offensive language \u2014 can amplify the same biases that human beings have.\u201d These findings are consistent with <a href=\"https:\/\/www.propublica.org\/article\/facebook-hate-speech-censorship-internal-documents-algorithms\" target=\"_blank\" rel=\"noopener noreferrer\">numerous studies<\/a> and <a href=\"https:\/\/www.theregister.co.uk\/2019\/10\/11\/ai_black_people\/\" target=\"_blank\" rel=\"noopener noreferrer\">articles<\/a> that have <a href=\"https:\/\/observer.com\/2019\/08\/google-ai-hate-speech-detector-black-racial-bias-twitter-study\/\" target=\"_blank\" rel=\"noopener noreferrer\">found<\/a> that <a href=\"https:\/\/arxiv.org\/pdf\/1905.12516.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">black speech is more heavily criticized<\/a> than its <a href=\"https:\/\/www.usatoday.com\/story\/news\/2019\/04\/24\/facebook-while-black-zucked-users-say-they-get-blocked-racism-discussion\/2859593002\/\" target=\"_blank\" rel=\"noopener noreferrer\">white counterparts<\/a>.<\/p>\n<p>This is one example of how the way that platforms enact their content moderation policies is having a disparate impact on communities of color. \u201cDisparate impact,\u201d a term the Supreme Court used in <a href=\"https:\/\/scholar.google.com\/scholar_case?case=8655598674229196978&amp;q=Griggs+v.+Duke+Power+Co.,+401+U.S.+424+(1977)&amp;hl=en&amp;as_sdt=20006\" target=\"_blank\" rel=\"noopener noreferrer\">Griggs v. 
Duke Power <\/a><a href=\"https:\/\/scholar.google.com\/scholar_case?case=8655598674229196978&amp;q=Griggs+v.+Duke+Power+Co.,+401+U.S.+424+(1977)&amp;hl=en&amp;as_sdt=20006\">Co.<\/a>, is a doctrine where a facially neutral policy that has a disproportionate or statistically significant impact on a protected class of persons is considered discriminatory. As Black Lives Matter or the Arab Spring showed the world, what is shared on social media can affect real-world change and how content is edited, whether by AI or by a human, can have a profound impact on the collective discourse. Williams\u2019 post is an example of how racial justice, Section 230 of the Communications Decency Act, and content moderation decisions made by companies will now be intertwined for the foreseeable future.<\/p>\n<p><strong>Section 230 and Content Moderation<\/strong><\/p>\n<p>Section 230 of the Communications Decency Act allows platforms to moderate third-party content. (For a full history of Section 230, how the law works, and what the law really means, see our<a href=\"https:\/\/www.publicknowledge.org\/tag\/section-230-series\/\" target=\"_blank\" rel=\"noopener noreferrer\"> Section 230 blog series<\/a>.) This is fundamentally a good thing. In an ideal world where there are multiple platforms vying for our screen time, superior content moderation policies and superior enforcement of those policies could be considered a market advantage. Different policies for different platforms could, if there were true competition in platforms, highlight a diverse set of voices much like magazines or newspapers highlight different voices in their content (the Wall Street Journal versus the <a href=\"https:\/\/www.afro.com\" target=\"_blank\" rel=\"noopener noreferrer\">Afro<\/a>, for instance). However, because of the consolidation of platforms, there is limited competition, which makes the content moderation policies of large platforms all the more important. 
As what happened to Williams highlights, content moderation policies have a powerful impact on the conversations that happen over the internet.</p>
<p>People on both ends of the political spectrum complain about bias by tech companies in content moderation. However, <a href="https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf" target="_blank" rel="noopener noreferrer">a study found</a> that black people were one and a half times more likely to have their content flagged on Twitter than their white counterparts, and that content was more than twice as likely to be flagged if it was written in African American Vernacular English (AAVE). This makes the fact that Williams’ post was taken down on Medium all the more interesting. To be clear, the arguments made here are not about the decision to take down the piece — they’re about the underlying policy that informs platform content moderation decisions, the decisions the platforms make, and the ripple effects they have. (This issue was also highlighted in John Bergmayer’s paper on <a href="https://www.publicknowledge.org/assets/uploads/blog/Even_Under_Kind_Masters.pdf" target="_blank" rel="noopener noreferrer">dominant platforms’ responsibility to provide due process protections to users</a>.) As a point of juxtaposition, I will compare the content moderation policies of Medium to those of another company that has had content moderation issues in the past: Airbnb.</p>
<p><strong>Medium</strong></p>
<p><a href="https://medium.com/about" target="_blank" rel="noopener noreferrer">Medium</a> is a platform dedicated to bringing together writers, thinkers, and storytellers to “bring you the smartest takes on topics that matter.” Individuals can become members, write stories to be published on the platform, and promote content to readers who are interested in a specific topic area.
These articles can be about almost anything, from insightful think pieces, to articles about historical events, to personal stories that people want to share with the world. The platform has a page of <a href="https://medium.com/policy/medium-rules-30e5502c4eb4" target="_blank" rel="noopener noreferrer">content rules</a> that outlines what can and cannot be posted on the website. It states what happens if someone is found to have broken the platform’s rules: That user’s post is removed and the user’s account is suspended without notice until Medium can determine whether it did in fact break the rules. If someone wants to appeal the takedown or the deletion of their account, Medium provides an email address and says it will “consider all good faith efforts to appeal,” with no further redress articulated.</p>
<p>While the list is not exhaustive, Medium states that, “[i]n deciding whether someone has violated the rules, we will take into account things like newsworthiness, the context and nature of the posted information, the likelihood and severity of actual or potential harms, and applicable laws.” Neither the calculus of the decisions that Medium makes under its content policy nor the appeals process is transparent or straightforward. Medium says that it “will consider all good faith efforts to appeal,” without elaborating on what those good faith efforts might include or what that “consideration” will entail. In deciding whether a post violates the rules, Medium says that it will look at “context, newsworthiness, and nature of the posted information and applicable privacy laws.” Without the article in question, it is hard to determine which of the 17 different rules Williams broke on Medium.
However, based on a Washington Post article, we can extrapolate which rules Williams’ post likely broke — the ones against violating another user’s privacy, jeopardizing another user’s reputation, or — ultimately — harassing another user.</p>
<p>Medium’s rules state that the platform does not allow “doxing, which includes not only private or obscure personal information but also the aggregation of publicly available information to target, <strong>shame</strong> [emphasis added], blackmail, harass, intimidate, threaten, or endanger.” Medium also does not allow harassment, which includes “bullying, threatening, or <strong>shaming someone</strong> [emphasis added], or posting things likely to encourage others to do so.” One might argue that when Williams posted the name of the man who called him a racial epithet, along with the video of their interaction in front of the police, he was doxing or harassing him by shaming him publicly. While there is an argument to be made that a momentary lapse of judgment should not be made permanent by the internet, <a href="https://www.thenation.com/article/the-social-shaming-of-racists-is-working/" target="_blank" rel="noopener noreferrer">there is also an argument that the public shaming of prejudice is effective</a>. Indeed, arguing from Medium’s own rules, the post could very easily have stayed up. Moreover, as for newsworthiness, multiple news outlets mentioned the neighbor by name before Medium took Williams’ post down, and all of those stories are still up at the time of this post’s publication.</p>
<p>While under Section 230 Medium is free to make these kinds of editorial decisions about third-party content, it is the arbitrary nature of these decisions that is troublesome.
Medium’s rules are neither clear nor articulable, and, as highlighted here, the policy does not outline clear or transparent processes for how content decisions are made. There are very <a href="https://www.publicknowledge.org/blog/due-process-and-our-approach-to-dominant-online-platforms/" target="_blank" rel="noopener noreferrer">limited appeals processes</a>, and the vagueness of the “good faith effort” provision gives both moderators and content creators little to go on when making content decisions. While it is healthy for different platforms to have different standards and policies, <a href="https://www.youtube.com/watch?v=c71nqMqmiaw" target="_blank" rel="noopener noreferrer">knowing is half the battle</a>, and making sure users know what kind of content is being taken down and why should be standard practice for platforms.</p>
<p><strong>Airbnb</strong></p>
<p>As a point of comparison, another platform, the home-sharing service Airbnb, has made it its mission to change its content moderation policies to make the company’s services as safe and welcoming as they can be. Airbnb is an online marketplace for arranging or offering lodging — primarily homestays — and tourism experiences internationally. Airbnb has had its own content moderation issues, such as the tenor of reviews left by users, misleading descriptions of listings, and discrimination by hosts in its internal messaging system — not to mention basic issues of user safety. In 2019, the company released a <a href="https://news.airbnb.com/an-update-on-airbnbs-work-to-fight-discrimination/" target="_blank" rel="noopener noreferrer">three-year report</a> that outlined the ways in which Airbnb had reduced, and continues to fight, discrimination on its platform in its various forms.
Due to its diligence in fighting discrimination on its platform, Airbnb received <a href="https://cleaver.house.gov/sites/cleaver.house.gov/files/16.06.2016%20Airbnb%20Letter.pdf" target="_blank" rel="noopener noreferrer">commendations</a> from Congressional Black Caucus members and civil rights organizations. Overall, Airbnb has been successful in instituting a number of practices on its platform to mitigate bias, even if it has a lot of work ahead of it to address other issues on the platform.</p>
<p>Airbnb’s most successful policy changes concerned how it handles content moderation. Its changes included <a href="https://www.airbnb.com/help/article/1405/airbnbs-nondiscrimination-policy-our-commitment-to-inclusion-and-respect" target="_blank" rel="noopener noreferrer">an anti-discrimination policy</a> that is more detailed and robust than its more general content policy standards. Airbnb’s policies also transparently lay out the process it uses to determine which posts are potentially discriminatory, and set that as the standard for its content moderation and takedown policy. Airbnb’s content moderation rules are in line with the Fair Housing Act’s anti-discrimination provisions. By focusing on the real issue with user content — the potential for users to feel unsafe or unfairly discriminated against — Airbnb was able to change the way users interact with the platform.</p>
<p>Airbnb saw the potential for discrimination and bad actors on its platform and gave users a clear-cut, multi-page policy explaining what will be taken down and why the site is going to monitor and moderate the content that is posted. Airbnb has a few policy pages, but it did not take me long to find the answers I needed about why something might be removed from the website. Airbnb also has a clear appeals process and articulable standards whereby users can understand why their content has been taken down or flagged.
These clear processes make it easier for both the platform and the user to understand why and how content is moderated. They let users know what is expected of them when using the platform and give them specific grounds for appeal if they think the platform made the wrong decision. Clear rules also make it easier for moderators to know what to take down, and give platforms firmer footing when their content or moderation practices are challenged.</p>
<p><strong>Underlying Issues</strong></p>
<p>Internet users may not often think about the content moderation policies of the platforms they use every day, and even less often about the ways in which those policies shape the lived experiences of other users. How content is moderated affects which stories are told and who has the agency to tell them. Furthermore, in the current media environment, the voices of marginalized communities are consistently treated as afterthoughts. When marginalized communities do find their ideas and stories told, it is often through “mainstream” publications, reporters, or authors who rarely share their perspectives or lived experience. This is what makes the content policies enacted by platforms so critical. Ultimately, these platforms have the ability to amplify marginalized communities and their voices, and to allow them to feel heard, respected, and treated with dignity. Platforms like Medium find themselves at the forefront of that debate with articles like the one authored by Williams.</p>
<p>This is not to say that Medium is intentionally or unintentionally biased in its content moderation — Medium hosts an array of articles and posts from people of nearly every gender, race, religious group, and background.
However, it is notable that a person of color’s account of experiencing prejudice was taken down while <a href="https://medium.com/@maxmarmer/a-proposal-for-how-to-tolerate-white-nationalism-cc75bc899bf0" target="_blank" rel="noopener noreferrer">a proposal for how to tolerate white nationalism</a> stays up. As is true in a variety of contexts, the more opaque the rules, the more disparate their impact on people of color. And, as was highlighted in a <a href="https://civic.mit.edu/2019/01/24/how-automated-tools-discriminate-against-black-language/" target="_blank" rel="noopener noreferrer">report by Civic Media</a>, the way some communities of color communicate has been flagged by “neutral” algorithms that target “toxic” language.</p>
<p>Platforms carry a lot of responsibility — they become judge, jury, and executioner on speech, in ways that make people anxious and that someone will inevitably disagree with. We need to look at the way content is moderated, what best practices should be, and what accountability looks like. Williams’ article may have had some valid points, and Medium may have had some valid reasons for not allowing the article to remain on its platform, but it benefits the platform, the user, and the internet ecosystem as a whole to have a clear understanding of why the decisions that were made were made. I, for one, would have loved to read it.</p>
<p><em>I would like to thank <a href="https://www.linkedin.com/in/adonne-washington-600638a3/" target="_blank" rel="noopener noreferrer">Adonne Washington</a> for her brilliance and help in developing this post.</em></p>
<p><em>Written by Bertram Lee. Published by Public Knowledge on January 29, 2020.</em></p>