{"id":36063,"date":"2023-08-07T17:25:04","date_gmt":"2023-08-07T17:25:04","guid":{"rendered":"https:\/\/publicknowledge.org\/?p=36063"},"modified":"2025-01-16T16:12:35","modified_gmt":"2025-01-16T16:12:35","slug":"lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it","status":"publish","type":"post","link":"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/","title":{"rendered":"Lies, Damn Lies, and Generative Artificial Intelligence: How GAI Automates Disinformation and What We Should Do About It"},"content":{"rendered":"\n<p>Generative artificial intelligence (AI) has exploded into popular consciousness since the release of ChatGPT to the general public for testing in November 2022. The term refers to machine learning systems that can be used to create new content in response to human prompts after being trained on vast amounts of data. Outputs of generative artificial intelligence may include audio (e.g., Amazon Polly and Murf.AI), code (e.g., CoPilot), images (e.g., Stable Diffusion, Midjourney, and Dall-E), text (e.g. ChatGPT, Llama), and videos (e.g., Synthesia). As has been the case for many advances in science and technology, we\u2019re <a href=\"https:\/\/www.politico.com\/newsletters\/digital-future-daily\/2023\/05\/15\/inside-the-ai-culture-war-00096963\">hearing from all sides<\/a> about the short- and long-term risks \u2013 as well as the societal and economic benefits \u2013 of these capabilities.<\/p>\n\n\n\n<p>In this post, we\u2019ll discuss the specific risk that broad use of generative artificial intelligence systems will further distort the integrity of our news environment through the creation and spread of false information. 
We\u2019ll also discuss a range of solutions that have been proposed to protect the integrity of our information environment.&nbsp;<\/p>\n\n\n\n<p><strong>Highlighting the Risks of Generative AI for Disinformation<\/strong><\/p>\n\n\n\n<p>Generative artificial intelligence systems can compound the existing challenges in our information environment in at least three ways: by increasing the number of parties that can create credible disinformation narratives; by making those narratives less expensive to create; and by making them more difficult to detect. If social media made disinformation cheaper and easier to <em>spread<\/em>, generative AI will make it cheaper and easier to <em>produce<\/em>. And traditional cues that alert researchers to false information, like language and syntax issues and cultural gaffes in foreign intelligence operations, will be missing.&nbsp;<\/p>\n\n\n\n<p>ChatGPT, the consumer-facing application of OpenAI\u2019s generative pre-trained transformer (GPT) models, has already been <a href=\"https:\/\/www.nytimes.com\/2023\/02\/08\/technology\/ai-chatbots-disinformation.html\">described as<\/a> \u201cthe most powerful tool for spreading misinformation that has ever been on the internet.\u201d Researchers at OpenAI, the company behind ChatGPT, have <a href=\"https:\/\/arxiv.org\/pdf\/1908.09203.pdf\">conveyed their own concerns<\/a> that their systems could be misused by \u201cmalicious actors\u2026 motivated by the pursuit of monetary gain, a particular political agenda, and\/or a desire to create chaos or confusion.\u201d Image generators, like Stability AI\u2019s Stable Diffusion, create such realistic images that they may undermine the classic entreaty to \u201cbelieve your own eyes\u201d in order to determine what is true and what is not.&nbsp;<\/p>\n\n\n\n<p>This isn\u2019t just about \u201challucinations,\u201d which occur when a generative model puts out factually incorrect or nonsensical information. 
Researchers have already shown that bad actors can use machine-generated <a href=\"https:\/\/www.npr.org\/2023\/06\/29\/1183684732\/ai-generated-text-is-hard-to-spot-it-could-play-a-big-role-in-the-2024-campaign\">propaganda to sway opinions<\/a>. The impact of generative models on our information environment can be cumulative: <a href=\"https:\/\/arxiv.org\/pdf\/2305.17493v2.pdf\">Researchers are finding<\/a> that the use of content from large language models to train other models pollutes the information environment and results in content that drifts further and further from reality. It all adds a scary new twist to the <a href=\"https:\/\/twitter.com\/tveastman\/status\/1069674780826071040?lang=en\">classic description of the internet<\/a> as \u201cfive websites, each consisting of screenshots of text from the other four.\u201d What if all those websites were actually training each other on false information, then feeding it to us?<\/p>\n\n\n\n<p>These risks have already created momentum among policymakers to regulate generative AI. The Federal Trade Commission <a href=\"https:\/\/www.washingtonpost.com\/technology\/2023\/07\/13\/ftc-openai-chatgpt-sam-altman-lina-khan\/?utm_source=substack&amp;utm_medium=email\">recently demanded<\/a> that OpenAI provide detailed descriptions of all complaints it has received about its products making \u201cfalse, misleading, disparaging or harmful\u201d statements about people. 
The <a href=\"https:\/\/www.whitehouse.gov\/pcast\/briefing-room\/2023\/05\/13\/pcast-working-group-on-generative-ai-invites-public-input\/\">White House<\/a>, <a href=\"https:\/\/judiciary.house.gov\/committee-activity\/hearings\/artificial-intelligence-and-intellectual-property-part-i\">House<\/a>, and <a href=\"https:\/\/www.judiciary.senate.gov\/committee-activity\/hearings\/oversight-of-ai-rules-for-artificial-intelligence\">Senate<\/a> are holding hearings or calling for comments about the risks of generative AI in order to steer potential policy interventions. Legislators have called for content authenticity standards; notifications to users when generative AI is used to create content; impact and risk assessments; and certification of \u201chigh-impact\u201d AI systems. And \u2013 inevitably \u2013 we\u2019ve already heard \u201cgenerative AI\u201d and \u201cSection 230\u201d <a href=\"https:\/\/www.hawley.senate.gov\/sites\/default\/files\/2023-06\/Hawley-No-Section-230-Immunity-for-AI-Act.pdf\">used together in a sentence.<\/a> (<a href=\"https:\/\/publicknowledge.org\/sorry-sydney\/\">Our position<\/a> is that the large language models associated with generative AI do not enjoy Section 230 protections.)<\/p>\n\n\n\n<p>So what should we do? It\u2019s already clear that a range of solutions will be both desirable and necessary in order to protect the integrity of our information environment and help restore trust in institutions \u2013 but, spoiler alert \u2013 few of them pertain specifically to disinformation generated by AI.&nbsp;<\/p>\n\n\n\n<p><strong>Technical Solutions<\/strong><\/p>\n\n\n\n<p>The explosion of focus on generative AI has ignited a parallel explosion in technological solutions to track \u201cdigital provenance\u201d and ensure \u201ccontent authenticity\u201d \u2013 that is, tools to help detect what content is created with AI. 
These tools, some of which come from the creators of AI systems, can be applied in different places on the value chain. For example, <a href=\"https:\/\/www.nytimes.com\/2023\/05\/18\/technology\/ai-chat-gpt-detection-tools.html\">Adobe\u2019s Firefly<\/a> generative technology, which will be integrated into Google\u2019s Bard chatbot, attaches \u201cnutrition labels\u201d to the content it produces, including the date an image was made and the digital tools used to create it. <a href=\"https:\/\/c2pa.org\/\">The Coalition for Content Provenance and Authenticity<\/a>, a consortium of major technology, media, and consumer products companies, has launched an interoperable verification standard for certifying the source and history (that is, provenance) of media content. Various systems for so-called \u201cdigital watermarking\u201d \u2013 modifications of generated text or media in ways that are invisible to people but can be detected by AI using cryptographic techniques \u2013 have also been proposed. Several companies, including <a href=\"https:\/\/scontent-sjc3-1.xx.fbcdn.net\/v\/t39.8562-6\/361643215_1004219997281331_6332933766797859993_n.pdf?_nc_cat=111&amp;ccb=1-7&amp;_nc_sid=ae5e01&amp;_nc_ohc=4bD2ixIWrlEAX9z_dvZ&amp;_nc_ht=scontent-sjc3-1.xx&amp;oh=00_AfBBl2AGeJzsQOpLEjO849AHIZ1P1MY68TTXBwcMwrybtQ&amp;oe=64BB0C07\">Meta for its new Llama 2 product<\/a>, encourage the use of classifiers that detect and filter outputs based on the meaning conveyed by the words chosen. 
A complementary technical approach, which can be applied downstream to detect inauthentic content, is digital forensics: tracking the originating network or device address, or conducting reverse image searches on content that has already been posted and shared.&nbsp;<\/p>\n\n\n\n<p>While each of these solutions has its own <a href=\"https:\/\/knightcolumbia.org\/content\/how-to-prepare-for-the-deluge-of-generative-ai-on-social-media\">strengths and weaknesses<\/a>, even in aggregate they are imperfect and may be outpaced by developments in the technology itself. Early tools, like OpenAI\u2019s own classifier, have already been <a href=\"https:\/\/techcrunch.com\/2023\/07\/25\/openai-scuttles-ai-written-text-detector-over-low-rate-of-accuracy\/\">retired because of their low rate<\/a> of accuracy. Opt-in standards won\u2019t be adopted by bad actors; in fact, bad actors may <a href=\"https:\/\/www.nytimes.com\/interactive\/2023\/06\/28\/technology\/ai-detection-midjourney-stable-diffusion-dalle.html?smid=nytcore-ios-share&amp;referringSource=articleShare\">copy, resave, shrink, or crop images<\/a>, which obscures the signals that AI detectors rely on. Bad actors may also <a href=\"https:\/\/arxiv.org\/pdf\/2308.00879.pdf\">favor earlier, more basic versions<\/a> of generative AI systems that lack the protections of newer versions. Like the content moderation systems of the dominant platforms, most of the \u201cdetectors\u201d currently <a href=\"https:\/\/www.fastcompany.com\/90929549\/google-jigsaw-toxic-speech-ai\">struggle<\/a> with writing that is not in English, and can sustain or amplify moderation bias against marginalized groups. In another parallel to content moderation, development of classifier systems can take <a href=\"https:\/\/www.wsj.com\/articles\/chatgpt-openai-content-abusive-sexually-explicit-harassment-kenya-workers-on-human-workers-cf191483\">a heavy toll on human workers<\/a>. 
In short, it is unlikely these tools would win a technological arms race with motivated generators of disinformation. And some of these methods raise concerns that they may encourage platforms to detect and moderate certain forms of content too aggressively, threatening free expression.&nbsp;<\/p>\n\n\n\n<p><strong>Content Moderation Solutions<\/strong><\/p>\n\n\n\n<p>Another range of solutions has to do with how downstream companies, such as search engines and social media platforms, moderate content created by generative AI. Most of their approaches are really extensions of their existing strategies to mitigate disinformation. These include using fact-checking partnerships to verify the veracity of content; labeling of problematic content as a means of adding friction to sharing; downranking content from repeat offenders; upranking trusted sources of information; and fingerprinting and sharing of known AI-created content across platforms (similar to processes that already exist for fingerprinting non-consensual intimate images and child sexual abuse materials). In their efforts to avoid partisan debates about censorship and bias, several of the major platforms have also shifted their emphasis from the content of posts to account and behavioral signals, like detecting networks of accounts that amplify each other&#8217;s messages, large groups of accounts that are created at the same time, and hashtag flooding.&nbsp;<\/p>\n\n\n\n<p>All of these methods may be helpful if lower cost, higher volume and more difficult detection are the hallmarks of generative AI in disinformation. They may also use risk assessments to determine where the potential harms are severe enough to warrant specific policies related to AI-generated content. (Elections and public health information are the most prevalent examples. When the stakes are that high, it may warrant prohibitions on certain uses of generative AI or manipulated media.) 
They could add information about AI-generated content (such as its prevalence, or the type moderated) to existing transparency reports. We would also favor policies that call for more accountability, including legal liability, for paid advertising. We don\u2019t have the same concerns about over-moderation of commercial speech.&nbsp;<\/p>\n\n\n\n<p>But all these methods carry the same limits and risks as they do for other forms of content. That includes the risk of over-moderation, which invariably has a particular impact on marginalized communities. As generative AI comes into broader use, users may actually be posting content that is beneficial and entertaining, making strict moderation policies by search and social media platforms undesirable as well as legally problematic. Even when strict policies and enforcement are warranted, their value depends on platforms\u2019 willingness and ability to enforce them, including in languages other than English. Do we really want platforms to be the main line of defense against harmful narratives of disinformation given the platforms\u2019 history, including on topics of enormous public importance like <a href=\"https:\/\/www.misinfotrackingreport.com\/\">COVID-19<\/a> and <a href=\"https:\/\/publicknowledge.org\/election-disinformation-2022-the-battlefield-shifts-again\/\">elections<\/a>?&nbsp;<\/p>\n\n\n\n<p><strong>AI Industry Self Regulation<\/strong>&nbsp;<\/p>\n\n\n\n<p>Until or unless there are government regulations, the field of AI will be governed largely by the ethical frameworks, codes, and practices of its developers and users. (There are exceptions, such as when AI systems have outcomes that are discriminatory.) Virtually every AI developer has articulated their own principles for responsible AI development. 
These principles may encompass each stage of the product development process, from the curation and pretraining of data sets to the setting of boundaries for outputs, and incorporate values like privacy and security, equity and inclusion, and transparency. They also articulate use policies that ostensibly govern what users can generate. For example, OpenAI\u2019s <a href=\"https:\/\/openai.com\/policies\/usage-policies\">usage policies<\/a> \u201cdisallow\u201d disinformation, as well as hateful, harassing, or violent content and coordinated inauthentic behavior, among other things.&nbsp;<\/p>\n\n\n\n<p>But these policies, no matter how well-intentioned, have significant limits. For example, <a href=\"https:\/\/www.nytimes.com\/2023\/07\/27\/business\/ai-chatgpt-safety-research.html\">researchers recently found<\/a> that the \u201cguardrails\u201d of both closed systems, like ChatGPT, and open-source systems, like <a href=\"https:\/\/ai.meta.com\/llama\/use-policy\/\">Meta\u2019s Llama 2 product<\/a>, can be \u201ccoaxed\u201d into generating biased, false, and violative responses. And, as in<em> every other industry<\/em>, voluntary standards and self-regulation are subject to daily trade-offs with growth and profit motives. This will be the case even when voluntary standards are <a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2023\/07\/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf\">agreed to collectively<\/a> (as with a new industry-led body to develop safety standards) or <a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/statements-releases\/2023\/07\/21\/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai\/\">secured by the White House<\/a> (as with a new set of commitments announced last week). 
For the most part, we\u2019re talking about the same companies \u2013 even some of the same people \u2013 whose voluntary standards have proven insufficient to safeguard our privacy, moderate content that threatens democracy, ensure equitable outcomes, and prohibit harassment and hate speech.&nbsp;<\/p>\n\n\n\n<p><strong>Regulatory Solutions<\/strong><\/p>\n\n\n\n<p>Any discussion of how to regulate disinformation in the United States \u2013 no matter how virulent, and no matter how it\u2019s created \u2013 is bounded by the simple fact that most of it is constitutionally protected speech. Regardless, policymakers are actively exploring whether, or how, to regulate generative (and other) AI. <a href=\"https:\/\/www.pewresearch.org\/short-reads\/2023\/07\/20\/most-americans-favor-restrictions-on-false-information-violent-content-online\/\">New research<\/a> shows public support for the federal government taking steps to restrict false information and extremely violent content online. In Public Knowledge\u2019s view: Proceed with caution. While there may be room and <a href=\"https:\/\/publicknowledge.org\/policy\/go-local-combating-misinformation-through-media-policy-promoting-localism-and-diversity\/\">precedent<\/a> for content standards for the most destructive \u201clawful but awful\u201d disinformation (such as networked disinformation that threatens national security and public health and safety), in general user speech is protected speech and free expression values are paramount.&nbsp;<\/p>\n\n\n\n<p>One framework \u2013 which begins by comparing AI to nuclear weapons \u2013 is grounded in the idea of <a href=\"https:\/\/www.vox.com\/future-perfect\/2023\/7\/3\/23779794\/artificial-intelligence-regulation-ai-risk-congress-sam-altman-chatgpt-openai\">incremental regulation<\/a>; that is, regulation that recognizes and accounts for a breadth of use cases and potential benefits as well as harms. 
It encourages us to focus on applications of the technology, not bans or restrictions on the technology itself. Every sector and use case comes with its own set of ethical dilemmas, technical complexities, stakeholders and policy challenges, and potential transformational benefits from AI. For example, in the case of disinformation, Public Knowledge <a href=\"https:\/\/publicknowledge.org\/a-superfund-for-the-internet-could-clean-up-our-polluted-information-ecosystem\/\">advocates for solutions<\/a> that address the harms associated with disinformation whether they originate with generative AI, Photoshop, troll farms, or your uncle Frank. The resulting policy solutions would encompass things like requirements for risk assessment frameworks and mitigation strategies; transparency on algorithmic decision-making and its outcomes; access to data for qualified researchers; guarantee of <a href=\"https:\/\/publicknowledge.org\/due-process-and-our-approach-to-dominant-online-platforms\/\">due process in content moderation<\/a>; impact assessments that show how algorithmic systems perform against tests for bias; and enforcement of <a href=\"https:\/\/www.techdirt.com\/2020\/08\/17\/it-doesnt-make-sense-to-treat-ads-same-as-user-generated-content\/\">accountability for the platform\u2019s business model<\/a> (e.g., paid advertising).<\/p>\n\n\n\n<p>We also need to account for the rapidity of innovation in this sector. One solution that Public Knowledge has favored is an <a href=\"https:\/\/www.digitalplatformact.com\/\">expert and dedicated administrative agency<\/a> for digital platforms. A dedicated agency should have the authority to conduct oversight and auditing of AI and other algorithmic decision making products in order to protect consumers and promote civic discourse and democracy. But such an agency should also have broader authorities, including to enhance competition and empower the public to choose platforms and services whose policies align with their values. 
<a href=\"https:\/\/publicknowledge.org\/the-privacy-debate-reveals-how-big-techs-transparency-and-user-control-arguments-fall-flat\/\">Data privacy protections<\/a> are also relevant here, as they would disallow the customization and targeting of content that can make disinformation narratives so potent and so polarizing. But let\u2019s implement protections that cover<em> all <\/em>the data collection, exploitation, and surveillance uses we\u2019ve discussed for so many years.&nbsp;<\/p>\n\n\n\n<p><strong>The Best Time To Act<\/strong><\/p>\n\n\n\n<p>To paraphrase an old expression, the best time to act to protect the integrity of our information environment is, well, in 2016; but the second-best time is now. There\u2019s been a lot of <a href=\"https:\/\/publicknowledge.org\/ai-policy-and-the-uncanny-valley-freakout\/\">freaking out<\/a> about the heightened risks of disinformation due to generative AI as the United States and 49 other countries enter another election cycle for 2024. But generative AI is only one of the new threats in our information environment.&nbsp;<\/p>\n\n\n\n<p>Virtually all of the major platforms have rolled back disinformation policies and protections before the 2024 election cycle. A U.S. District Court judge recently issued a <a href=\"https:\/\/storage.courtlistener.com\/recap\/gov.uscourts.lawd.189520\/gov.uscourts.lawd.189520.293.0_1.pdf\">ruling<\/a> and <a href=\"https:\/\/storage.courtlistener.com\/recap\/gov.uscourts.lawd.189520\/gov.uscourts.lawd.189520.294.0_2.pdf\">preliminary injunction<\/a> limiting contact between Biden administration officials and social media platforms over certain online content, even some content relating to national security and public health and safety. There is a powerful new counter-narrative in Congress and the judicial system about the government\u2019s role in content moderation and an equation with censorship. 
Social media platforms, and media in general, seem to be <a href=\"https:\/\/www.semafor.com\/article\/07\/30\/2023\/the-fragmentation-election\">fragmenting<\/a>. This could be good or bad: Will the popularity of alternative, sometimes highly partisan, platforms send the conspiracy theorists back underground, made less dangerous because they are less able to find one another, connect, and communicate? Could more cohesive online communities with more in common increase the civility of these platforms? Or will the end of a few dominant digital gatekeepers mean even greater sequestering and polarization? And what happens if Twitter \u2013 or X \u2013 does implode like the Titan submersible, and its wonky, highly influential user base of journalists, politicians and experts disbands and can\u2019t find one another to connect the dots on world events?&nbsp;<\/p>\n\n\n\n<p>It will take a <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/books\/NBK572169\/pdf\/Bookshelf_NBK572169.pdf\">whole-of-society approach<\/a> to restore trust in our information environment, and we need to accelerate solutions that have already been proposed. We favor solutions that equip civil society to identify false information and allow all Americans to make informed choices about what information they share. We should enable research into how disinformation is seeded and spread and how to counteract it. Policymakers should create incentives for the technology platforms to change their policies and product design, and they should foster more competition and choice among media outlets. Civil society should convene stakeholders, including from the communities most impacted by misinformation, to research and design solutions \u2013 all while protecting privacy and freedom of expression. 
And we should use <a href=\"https:\/\/publicknowledge.org\/policy\/go-local-combating-misinformation-through-media-policy-promoting-localism-and-diversity\/\">policy to solve the collapse of local news<\/a>, since it has opened information voids that disinformation rushes in to fill.<\/p>\n\n\n\n<p>Let\u2019s not waste a crisis, even if it\u2019s a false one. Let\u2019s focus the explosion of attention on generative AI and its threats to democracy into productive solutions to the challenges and harms of disinformation we\u2019ve been facing for years.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The recent explosion of generative AI brings many potential benefits to society, but along with these come just as many risks.<\/p>\n","protected":false},"author":189,"featured_media":36064,"parent":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[5],"tags":[11,14,29],"class_list":["post-36063","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-insights","tag-content-moderation","tag-platform-regulation","tag-trustworthy-ai"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.5 (Yoast SEO v26.5) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Lies, Damn Lies, and Generative Artificial Intelligence: How GAI Automates Disinformation and What We Should Do About It - Public Knowledge<\/title>\n<meta name=\"description\" content=\"The recent explosion of generative AI brings many potential benefits to society, but along with these come just as many risks.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/\" \/>\n<meta 
property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Lies, Damn Lies, and Generative Artificial Intelligence: How GAI Automates Disinformation and What We Should Do About It\" \/>\n<meta property=\"og:description\" content=\"The recent explosion of generative AI brings many potential benefits to society, but along with these come just as many risks.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/\" \/>\n<meta property=\"og:site_name\" content=\"Public Knowledge\" \/>\n<meta property=\"article:published_time\" content=\"2023-08-07T17:25:04+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-01-16T16:12:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2023\/08\/Website-Pictures-1440x720.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1440\" \/>\n\t<meta property=\"og:image:height\" content=\"720\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Lisa Macpherson\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Lisa Macpherson\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"12 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/\"},\"author\":{\"name\":\"Lisa Macpherson\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/person\/757e28331d7a5e31a3290be1d16d219b\"},\"headline\":\"Lies, Damn Lies, and Generative Artificial Intelligence: How GAI Automates Disinformation and What We Should Do About It\",\"datePublished\":\"2023-08-07T17:25:04+00:00\",\"dateModified\":\"2025-01-16T16:12:35+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/\"},\"wordCount\":2697,\"publisher\":{\"@id\":\"https:\/\/publicknowledge.org\/#organization\"},\"image\":{\"@id\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2023\/08\/Website-Pictures.png\",\"keywords\":[\"Content Moderation\",\"Platform Regulation\",\"Trustworthy 
AI\"],\"articleSection\":[\"Insights\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/\",\"url\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/\",\"name\":\"Lies, Damn Lies, and Generative Artificial Intelligence: How GAI Automates Disinformation and What We Should Do About It - Public Knowledge\",\"isPartOf\":{\"@id\":\"https:\/\/publicknowledge.org\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2023\/08\/Website-Pictures.png\",\"datePublished\":\"2023-08-07T17:25:04+00:00\",\"dateModified\":\"2025-01-16T16:12:35+00:00\",\"description\":\"The recent explosion of generative AI brings many potential benefits to society, but along with these come just as many 
risks.\",\"breadcrumb\":{\"@id\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/#primaryimage\",\"url\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2023\/08\/Website-Pictures.png\",\"contentUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2023\/08\/Website-Pictures.png\",\"width\":2000,\"height\":1000},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/publicknowledge.org\/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/publicknowledge.org\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Lies, Damn Lies, and Generative Artificial Intelligence: How GAI Automates Disinformation and What We Should Do About It\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/publicknowledge.org\/#website\",\"url\":\"https:\/\/publicknowledge.org\/\",\"name\":\"Public 
Knowledge\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/publicknowledge.org\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/publicknowledge.org\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/publicknowledge.org\/#organization\",\"name\":\"Public Knowledge\",\"url\":\"https:\/\/publicknowledge.org\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2021\/12\/pk_social_logo-2.png\",\"contentUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2021\/12\/pk_social_logo-2.png\",\"width\":400,\"height\":200,\"caption\":\"Public Knowledge\"},\"image\":{\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/person\/757e28331d7a5e31a3290be1d16d219b\",\"name\":\"Lisa Macpherson\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/a7ccfea9aef75381949570e9237ff7f0ef0efcd0f80308496086b5b8f6a2989e?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/a7ccfea9aef75381949570e9237ff7f0ef0efcd0f80308496086b5b8f6a2989e?s=96&d=mm&r=g\",\"caption\":\"Lisa Macpherson\"},\"url\":\"https:\/\/publicknowledge.org\/author\/lisa-macpherson\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","_links":{"self":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/posts\/36063","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/users\/189"}],"replies":[{"embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/comments?post=36063"}],"version-history":[{"count":0,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/posts\/36063\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/media\/36064"}],"wp:attachment":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/media?parent=36063"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/categories?post=36063"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/tags?post=36063"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}