{"id":38161,"date":"2025-07-22T21:48:26","date_gmt":"2025-07-22T21:48:26","guid":{"rendered":"https:\/\/publicknowledge.org\/?p=38161"},"modified":"2025-07-22T22:25:21","modified_gmt":"2025-07-22T22:25:21","slug":"hopes-and-fears-for-president-trumps-ai-action-plan","status":"publish","type":"post","link":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/","title":{"rendered":"Hopes and Fears for President Trump\u2019s AI Action Plan"},"content":{"rendered":"\n<p>When Donald Trump returned to the White House <a href=\"https:\/\/www.commoncause.org\/articles\/big-tech-is-donating-millions-to-trumps-inauguration\/\">buoyed by the support of tech billionaires<\/a>, one of the first orders of business was <a href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/01\/removing-barriers-to-american-leadership-in-artificial-intelligence\/\">rescinding<\/a> the Biden administration\u2019s executive orders on artificial intelligence. Tomorrow, President Trump intends to <a href=\"https:\/\/www.forbes.com\/sites\/paulocarvao\/2025\/07\/19\/unleashing-ai-trumps-vision-for-american-tech-dominance\/\">announce a new AI action plan<\/a>, likely to be accompanied by new executive orders of his own. This \u201c<a href=\"https:\/\/www.prnewswire.com\/news-releases\/president-donald-j-trump-to-deliver-keynote-address-at-winning-the-ai-race-summit-hosted-by-allin-podcast-and-hill--valley-forum-302505499.html\">Winning the AI Race\u201d action plan<\/a> will be influenced by more than 10,000 responses the White House Office of Science and Technology Policy (OSTP) received in response to its request for comments. 
Public Knowledge was <a href=\"https:\/\/publicknowledge.org\/policy\/2025-artificial-intelligence-action-plan-comments\/\">one of those commenters<\/a>, writing in full hope that the Trump administration would adopt policies in accord with its stated goals \u201cto promote human flourishing, economic competitiveness, and national security.\u201d<\/p>\n\n\n\n<p>However, a <a href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=5278764\">survey of policy proposals<\/a> from other commenters, the actions of the Republican Congress, last week\u2019s press reporting, and rhetoric from key Trump administration advisors give us reason to fear that President Trump\u2019s plan will fail to meet those ambitions and instead simply cede control of AI\u2019s future to the private sector. <em>That would be a profound failure of leadership at a pivotal moment in the development of this critical technology.<\/em> In this post, we review what we still hope President Trump\u2019s action plan might contain, while describing what there is to fear in an action plan that sells out the American people.<\/p>\n\n\n\n<h2 class=\"heading-2 wp-block-heading\" id=\"h-public-knowledge-s-priorities-for-an-ai-action-plan\"><strong>Public Knowledge\u2019s Priorities for an AI Action Plan<\/strong><\/h2>\n\n\n\n<p>Ideally, the White House would adopt the priorities that we have assessed to be the most important for delivering an innovative and competitive AI ecosystem. 
In our comments, Public Knowledge encouraged the administration to focus on four priorities:<br><\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Protect the rights to read and learn that make AI training possible.<\/li>\n\n\n\n<li>Support open-source, collaborative research and development.<\/li>\n\n\n\n<li>Build and maintain public physical and digital AI infrastructure to prevent monopolization and private enclosure.<\/li>\n\n\n\n<li>Develop standards and sensible rules around AI explainability and transparency to ensure trust and adoption.<br><\/li>\n<\/ol>\n\n\n\n<p>You can read all about why these priorities are so critical in <a href=\"https:\/\/publicknowledge.org\/policy\/2025-artificial-intelligence-action-plan-comments\/\">our comments<\/a>, as well as our other writing about <a href=\"https:\/\/publicknowledge.org\/courts-agree-ai-training-ruled-as-fair-use-in-bartz-v-anthropic-and-kadrey-v-meta\/\">AI training<\/a>, the <a href=\"https:\/\/publicknowledge.org\/policy\/final-ntia-comments\/\">importance of open source<\/a>, the need for <a href=\"https:\/\/publicai.network\/whitepaper\/\">Public AI infrastructure<\/a>, and our <a href=\"https:\/\/publicknowledge.org\/policy\/ntia-ai-accountability\/\">approach to AI accountability<\/a>. We maintain that if America is truly interested in \u201cwinning the AI race,\u201d then we need an open, innovative, competitive, and dynamic AI ecosystem that users trust. Without a focus on these four priorities, we are looking at an AI sector dominated by Big Tech, infrastructure projects that line the pockets of crony capitalists, and opaque and unsafe AI systems that continue to attract suspicion and cause harm.<\/p>\n\n\n\n<p>Winning the AI race is only important if the American people are the winners. Winning the AI race should mean shared prosperity, AI that reflects our diverse and pluralistic free society, and technology that we can understand and trust. 
Racing ahead blindly, without thought to consequence or direction, is <em>not<\/em> the path to winning anything.<\/p>\n\n\n\n<h2 class=\"heading-2 wp-block-heading\" id=\"h-holding-out-hope\"><strong>Holding Out Hope<\/strong><\/h2>\n\n\n\n<p>Obviously, the Trump administration adopting our four priorities would be ideal, but there are a couple of narrower policy choices roughly in line with our priorities that could make it into the action plan. These hopes are based on policy priorities that attracted support from other key commenters and fit with the administration\u2019s innovation-focused agenda.<\/p>\n\n\n\n<h3 class=\"heading-3 wp-block-heading\" id=\"h-hope-prioritizing-ai-research-and-investment-in-scientific-institutions\"><strong><em>Hope: Prioritizing AI research and investment in scientific institutions<\/em><\/strong><\/h3>\n\n\n\n<p>The most obvious policy priority for winning the AI race would be a focus on supporting AI research and development. To ensure America\u2019s continued leadership and technical edge, the Trump administration should support, fund, and promote AI research. Unusually, AI advancements have emerged from private and nonprofit labs, whereas many other important technologies originated through government research support. That means there is both a need and an opportunity to activate our public universities, national labs, and other scientific institutions. 
At a moment when the Trump administration has been <a href=\"https:\/\/www.nytimes.com\/interactive\/2025\/05\/22\/upshot\/nsf-grants-trump-cuts.html\">defunding important scientific research<\/a> and <a href=\"https:\/\/www.npr.org\/2025\/06\/10\/nx-s1-5424450\/ways-trump-administration-is-going-after-colleges\">threatening the funding for colleges and universities<\/a>, this action plan should be a moment to reverse course and support science, innovation, and learning.<\/p>\n\n\n\n<p>Commenters across sectors agree that the federal government must do more to strengthen public research capacity and expand access to the resources that power AI innovation. Their recommendations differ in emphasis\u2014but share a core conviction: that sustained, well-targeted public investment in R&amp;D is essential for maintaining U.S. leadership, enabling breakthroughs in science and national security, and ensuring that AI development benefits the broader public.<\/p>\n\n\n\n<p>Google emphasizes that \u201clong-term, sustained investments in foundational domestic R&amp;D and AI-driven scientific discovery\u201d have historically given the U.S. a global advantage\u2014and that now is the time to \u201csignificantly bolster these efforts.\u201d <a href=\"https:\/\/static.googleusercontent.com\/media\/publicpolicy.google\/en\/\/resources\/response_us_ai_action_plan.pdf\">Google\u2019s comment calls<\/a> for faster allocation of funding for early-stage research and broader availability of \u201cessential compute, high-quality datasets, and advanced AI models\u201d to scientists and institutions. Lowering these barriers, Google argues, will allow the American research community to focus on innovation instead of resource acquisition. 
Google also encourages the government to invest in federal prize challenges for unsolved scientific problems, expand partnerships with national labs in key areas like cybersecurity and biosecurity, and make government datasets available for commercial training and experimentation.<\/p>\n\n\n\n<p>Encode similarly advocates for boosting public research institutions, especially through investment in AI for science. Encode points to the critical role that <a href=\"https:\/\/www.usa.gov\/agencies\/defense-advanced-research-projects-agency\">DARPA<\/a>, Stanford, and other federally supported institutions played in the development of foundational technologies\u2014from neural networks to the internet\u2014and warns that the current \u201clack of computational resources and access to critical data is stifling innovation\u201d at U.S. universities. <a href=\"https:\/\/cdn.sanity.io\/files\/3tzzh18d\/production\/423c721cf469e91b47ef1a844059b79919397358.pdf\">Encode\u2019s comment<\/a> calls for permanently establishing and funding the National AI Research Resource (NAIRR) to provide institutions with the compute, data, software, and training needed to advance. Encode envisions a \u201csuperhighway for science\u201d that connects universities, national labs, and industry partners into a coordinated ecosystem\u2014dramatically accelerating the timeline from research to real-world impact.<\/p>\n\n\n\n<p>Georgetown University\u2019s Center for Security and Emerging Technology (CSET) likewise recommends expanding federal AI R&amp;D across universities, national labs, federally funded R&amp;D centers (FFRDCs), and nonprofits. 
<a href=\"https:\/\/cset.georgetown.edu\/publication\/csets-recommendations-for-an-ai-action-plan\/\">Georgetown\u2019s comment<\/a> underscores the importance of investment in both technical and non-technical research, as well as the infrastructure and data required to support AI for science, especially in strategic sectors like biotechnology.<\/p>\n\n\n\n<p>The Federation of American Scientists (FAS) emphasizes that while American AI leadership has benefited from private investment, \u201ccritical high-impact areas remain underfunded.\u201d <a href=\"https:\/\/fas.org\/publication\/rfi-development-of-artificial-intelligence-ai-action-plan\/\">The FAS proposes<\/a> a federal agenda focused on expanding access to data, funding overlooked areas of research, and defining national priority challenges. In particular, they support scaling the NAIRR from pilot to full program, and integrating its resources with other proven government initiatives such as the NIST AI Safety Institute, the AI Use Case Inventory, and the Department of Energy\u2019s Office of Critical and Emerging Technologies (CET). The FAS also calls for the creation of a dedicated AI and Computing Laboratory at DOE, modeled after ARPA-E, to enable rapid procurement, hiring, and academic-industry partnerships.<\/p>\n\n\n\n<p>Together, these proposals offer a clear and coherent roadmap. An action plan that prioritizes AI R&amp;D, expands public access to compute and data, and invests in the public institutions that make scientific progress possible would not only serve national competitiveness\u2014but would also embody a public-interest vision of innovation.<\/p>\n\n\n\n<h3 class=\"heading-3 wp-block-heading\" id=\"h-hope-embracing-open-source-especially-in-government-procurement\"><strong><em>Hope: Embracing open source, especially in government procurement<\/em><\/strong><\/h3>\n\n\n\n<p>A second priority that we hope might appear is a focus on open source AI. 
The Trump administration has begun to loosen certain export controls related to AI, apparently driven by concerns about global competition and the adoption of American AI abroad. While we believe that building trust through leadership on accountability would go a long way toward promoting adoption both at home and abroad, another key move will be an embrace of open source AI.<\/p>\n\n\n\n<p>Open source AI offers considerable benefits for security, transparency, evaluation, and accessibility. Promoting the development of a robust open source AI ecosystem, like the robust open source software ecosystem that is ubiquitous and foundational today, would significantly advance the pace of innovation and bolster U.S. competitiveness.<br><br>The Trump action plan could embrace open source AI as a preference in government procurement and use. This would leverage the size and significance of the federal government to encourage broader open source development. In addition, it would allow the government to harness the benefits of open source for itself, including cost savings, since the underlying models are free; better transparency; and greater customization and secure processing. 
There is some reason to hope this message reaches the administration: This idea was embraced by civil society organizations like the <a href=\"https:\/\/www.eff.org\/files\/2025\/03\/13\/2024.03.13_electronic_frontier_foundation_comments_ai_action_plan.pdf\">Electronic Frontier Foundation<\/a> and the <a href=\"https:\/\/opensource.org\/blog\/osi-and-apereo-foundation-respond-to-white-house-on-ai-action-plan\">Open Source Initiative<\/a>, academic experts like those at Georgetown\u2019s CSET, and even by AI companies and venture capital firms like <a href=\"https:\/\/d1lamhf6l6yk6d.cloudfront.net\/uploads\/2025\/03\/a16z-National-AI-Action-Plan-OSTP-Submission.pdf\">Andreessen Horowitz<\/a>.&nbsp;<\/p>\n\n\n\n<p>Meta, which produces Llama (one of the most widely used open weights models), <a href=\"https:\/\/files.nitrd.gov\/90-fr-9088\/Meta-AI-RFI-2025.pdf\">wrote in detail<\/a> about how the government ought to prefer open source for government use to strengthen U.S. security while reducing costs and saving taxpayer dollars, forgo restrictions on open source release in order to compete with China\u2019s open source models like DeepSeek, and drive innovation and economic prosperity.<br><br>Even OpenAI, which has often failed to fulfill the expectations its name would imply with its closed models and secretive data practices, <a href=\"https:\/\/cdn.openai.com\/global-affairs\/ostp-rfi\/ec680b75-d539-4653-b297-8bcf6e5f7686\/openai-response-ostp-nsf-rfi-notice-request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan.pdf\">wrote<\/a> that the U.S. 
should develop a policy of exporting open-source models to ensure American leadership in AI around the world\u2014and it has <a href=\"https:\/\/www.wired.com\/story\/openai-sam-altman-announce-open-source-model\/\">recently announced<\/a> that it intends to release an open weights model of its own.<br><br>An action plan that embraces open source AI\u2014especially through federal procurement\u2014would align with both the administration\u2019s competitive agenda and the public interest. By backing open source as a strategic asset, the Trump administration could lower barriers to entry, enhance national security, and empower a broader set of innovators to contribute to the future of AI. It would also send a clear signal: that American leadership in AI is not defined by corporate secrecy or closed systems, but by openness, collaboration, and the freedom to build.<\/p>\n\n\n\n<h2 class=\"heading-2 wp-block-heading\" id=\"h-well-founded-fears\"><strong>Well-founded Fears<\/strong><\/h2>\n\n\n\n<p>While there are reasons to hope for sensible, pro-innovation policies in President Trump\u2019s AI action plan, there are equally strong\u2014and perhaps better founded\u2014reasons to worry.&nbsp;<\/p>\n\n\n\n<p>In particular, the Trump administration\u2019s ideological agenda threatens to undermine democratic principles under the guise of neutrality and \u201canti-woke\u201d rhetoric, create legal loopholes for AI companies to avoid regulation, and sell out the American people on AI infrastructure\u2014exactly when we should be investing in the future on the ground floor.&nbsp;<\/p>\n\n\n\n<h3 class=\"heading-3 wp-block-heading\" id=\"h-fear-a-continued-un-american-attack-on-diversity-and-equity-under-the-guise-of-neutrality\"><strong><em>Fear: A continued un-American attack on diversity and equity under the guise of neutrality<\/em><\/strong><\/h3>\n\n\n\n<p>Recent press reports suggest the White House is preparing an executive order targeting so-called \u201cwoke\u201d AI models as part of its action plan. 
According to <a href=\"https:\/\/www.wsj.com\/tech\/ai\/white-house-prepares-executive-order-targeting-woke-ai-e68e8e24\">The Wall Street Journal<\/a>, \u201cThe order would dictate that AI companies getting federal contracts be politically neutral and unbiased, an effort to combat what administration officials see as overly liberal AI models.\u201d But the Trump administration and its allies do not have a record of neutrality or even-handedness. Despite claiming to champion free expression, their policies have repeatedly promoted censorship, bias, and discrimination.<\/p>\n\n\n\n<p>We should dispense with the notion, up front, that the government can or should aim for political neutrality in its use of AI. The government not only <em>can<\/em> have a viewpoint; in a democracy, it <em>must<\/em>. A functioning democracy is about building institutions that reflect our shared values\u2014not pretending that neutrality is required when fundamental rights and freedoms are at stake.<br><\/p>\n\n\n\n<p>There is a mistaken claim that the Biden administration acted inappropriately by embracing America\u2019s strength as a diverse nation with a commitment to justice and equality. President Trump repealed former President Biden\u2019s Executive Order on \u201c<a href=\"https:\/\/bidenwhitehouse.archives.gov\/briefing-room\/presidential-actions\/2023\/10\/30\/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\/\">Safe, Secure, and Trustworthy Development and Use of AI<\/a>\u201d that simply required federal agencies to ensure that AI systems adopted by the government did not promote bias or discrimination in certain high-risk sectors like housing and healthcare, and respected existing civil rights laws. But dismantling these policies that promote equity is not a return to neutrality\u2014it is the imposition of a new and narrower ideology.<\/p>\n\n\n\n<p>There is no such thing as a value-free or neutral AI system. 
All technology reflects choices, priorities, and trade-offs. Design encodes values\u2014just as law and policy do. That\u2019s precisely why leadership in AI matters: We want AI that encodes democratic values, <em>not<\/em> authoritarian ones. If America hopes to outpace its rivals, especially China, it must do so by building systems that reflect openness, pluralism, and human dignity.<\/p>\n\n\n\n<p>The real concern is that the Trump administration will not stop at repealing Biden-era equity policies\u2014it will replace them with new ideological mandates cloaked in the language of \u201cneutrality\u201d and \u201cfairness.\u201d But we have already seen what this language has been used to justify.<\/p>\n\n\n\n<p>When marking up the \u201cFuture of AI Innovation\u201d Act last Congress, Senator Ted Cruz (R &#8211; Texas) <a href=\"https:\/\/www.commerce.senate.gov\/services\/files\/95D950E2-2321-469D-9031-DD5C32EC4296\">amended the bill<\/a> with a so-called anti-woke amendment that prohibited certain policies. Some of the prohibitions at the top seemed to make sense (e.g., cannot promote that one race or sex is inherently superior to another), but further down the list, the amendment tipped its hand: It <em>explicitly prohibited<\/em> policies stating that AI \u201cshould be designed in an equitable way that prevents disparate impacts based on a protected class or other societal classification.\u201d It <em>explicitly prohibited<\/em> policies aimed at preventing disparate impacts due to bias in training data, and <em>explicitly prohibited<\/em> even doing impact assessments or promoting use of technology or techniques to \u201censure inclusivity and equity in the creation, design, or development of the technology.\u201d This did not pass into law, but it is shocking how blatant and explicit the agenda is here. 
This is not a desire for neutrality, or a staid disagreement about the values of meritocracy: It is a reactionary attack on the hard-won principles of justice and equality that have long animated America\u2019s best aspirations.&nbsp;<\/p>\n\n\n\n<p><br>The Trump administration itself has been aggressively dismantling diversity, equity, and inclusion programs since <a href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/01\/ending-radical-and-wasteful-government-dei-programs-and-preferencing\/\">the day President Trump took office<\/a>. But again, this is not an effort to simply restore \u201cneutrality\u201d or change employment rules, as President Trump has repeatedly embarked on a campaign of <a href=\"https:\/\/www.pbs.org\/newshour\/show\/pentagon-history-purge-highlights-which-stories-are-told-and-why-others-are-ignored\">removing women and racial minorities from government websites<\/a>, including heroic veterans like the Navajo Code Talkers. He has also directly attacked the identities of transgender and nonbinary people with <a href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/01\/defending-women-from-gender-ideology-extremism-and-restoring-biological-truth-to-the-federal-government\/\">an Executive Order<\/a> that blatantly and unconstitutionally <a href=\"https:\/\/www.aclu.org\/news\/lgbtq-rights\/trumps-executive-orders-promoting-sex-discrimination-explained\">promotes discrimination on the basis of sex<\/a>. That Order led to a Federal Trade Commission workshop earlier this month that <a href=\"https:\/\/www.lgbttech.org\/post\/organizations-counter-ftc-s-anti-transgender-workshop-with-a-workshop-focused-on-medical-truth-and-c\">misused Commission authority<\/a> to undermine and delegitimize well-established medical practices around gender-affirming care, under the guise of consumer protection. 
This pattern makes it clear that supposedly neutral-sounding efforts are, in fact, a smokescreen for discrimination against vulnerable minority communities. These are not the \u201cAmerican values\u201d we want encoded into AI.<br><br>The biggest movie in the nation <a href=\"https:\/\/the-numbers.com\/movie\/Superman-(2025)#tab=box-office\">right now<\/a> is \u201cSuperman,\u201d and we think we can turn to him for some guidance:<\/p>\n\n\n\n<p>In a 1950 anti-bigotry PSA, Superman told schoolchildren that \u201cour country is made up of Americans of many different races, religions, and national origins,\u201d and that when you hear someone speak against a classmate because of who they are, \u201cthat kind of talk is un-American.\u201d That message was simple, clear, and patriotic. It still is. Rebranding discrimination as \u201cneutrality\u201d doesn\u2019t make it any less discriminatory. If we want AI that reflects American values, we should build systems rooted in fairness, equality, and the belief that diversity is a strength. Anything less is not just bad technology\u2014it\u2019s un-American.<br><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXd5lxvPf74qEqEDp5j8V15jqADA08-pVq9XdJyy6_KDMt_LkLspaxlUz9WXWI0w2WUGkL9dsQlw2-6N8qpwBL35zYuN41wjB52kNSnaxxB8GJS1h_pDpI8eEAzfMqYQhpmlx3kHaA?key=20o7xEwPiqT1pDO1Cqz0AQ\" style=\"\" alt=\"Image from a Superman Comic, where Superman addresses a small crowd of teenagers saying, &quot;...and remember, boys and girls, your school \u2014 like our country \u2014 is made up of Americans of *many* different races, religions, and national origins, so... if you hear anybody talk against a schoolmate or anyone else because of his religion, race, or national origin\u2014don't wait: tell him THAT KIND OF TALK IS UN-AMERICAN. 
Help keep your school All-American!&quot;\"><\/p>\n\n\n\n<h3 class=\"heading-3 wp-block-heading\" id=\"h-fear-creating-legal-loopholes-for-ai-that-put-the-public-at-risk\"><strong><em>Fear: Creating legal loopholes for AI that put the public at risk<\/em><\/strong><\/h3>\n\n\n\n<p>Another significant fear is that the Trump administration\u2019s AI action plan will create broad legal loopholes for AI companies\u2014failing to enforce existing laws and preempting new ones\u2014thereby undermining the legal frameworks that protect the public. The Trump administration\u2019s deregulatory posture, combined with recent industry lobbying, raises concern that the action plan will seek to preempt state laws, emphasize voluntary industry standards, and develop so-called &#8220;<a href=\"https:\/\/techfreedom.org\/wp-content\/uploads\/2025\/03\/TF-Public-Comment-on-AI-Action-Plan.pdf\">regulatory sandboxes<\/a>&#8221;\u2014carveouts that exempt AI companies from existing consumer protection, civil rights, and safety laws under the pretense of fostering innovation. These light-touch approaches to regulation may each have a place in fostering the growth of an emerging technology, but each needs to be paired with stronger, enforceable rules.<\/p>\n\n\n\n<p>The federal government should not prevent states from stepping in where Congress has fallen short. Recently, Public Knowledge strongly opposed a proposed 10-year moratorium on state AI regulation\u2014a ban so sweeping and extreme that it was eventually <a href=\"https:\/\/time.com\/7299044\/senators-reject-10-year-ban-on-state-level-ai-regulation-in-blow-to-big-tech\/\">defeated 99-1 in the U.S. Senate<\/a>. Yet the sentiment behind it lingers. Some industry-aligned proposals continue to push for broad preemption of state AI laws without offering any meaningful federal safeguards to replace them. 
That kind of preemption\u2014where the federal government takes away states\u2019 power to regulate but refuses to do the job itself\u2014is a recipe for disaster. Yes, a confusing patchwork of state rules might eventually become burdensome, but it would be a mistake to jump to broad preemption without a uniform regulatory regime to replace it.<br><br>In the meantime, states have long served as laboratories of democracy, often leading the way on issues like consumer protection, environmental standards, and digital rights. If the federal government wants to preempt states with national standards, it should do so\u2014but only if those standards are real, enforceable, and smarter than what states are already doing.&nbsp;<\/p>\n\n\n\n<p>Similarly, developing voluntary industry standards for AI systems through broadly inclusive stakeholder processes may be a good and necessary step, but it cannot be the endpoint. Without meaningful enforcement, standards are suggestions, and AI is too important to let companies simply regulate themselves. In the absence of real rules, we risk replacing an outdated regulatory system with an empty one\u2014one that looks slick and innovative on paper but does nothing to safeguard civil rights, consumer protection, or national security in practice.<\/p>\n\n\n\n<p>Finally, we already have laws on the books, like the Fair Housing Act, state-level privacy laws, and other sector-specific regulations, that can be applied to AI. These laws should be enforced. But there is a danger that AI companies will be allowed to operate in a gray zone, shielded from liability simply because their systems are new or difficult to interpret. That would not be innovation; it would be evasion. To be clear, we do not oppose experimentation, sandboxes, or flexible regulatory tools when used appropriately. 
But creating loopholes that let powerful firms skirt the law is not sound policy\u2014accountability and liability should rest with the firms that have the resources and ability to best protect the public from downstream harms.<\/p>\n\n\n\n<p>Driving innovation forward requires trust. The American people need AI systems that are safe, fair, and subject to the rule of law.<\/p>\n\n\n\n<h3 class=\"heading-3 wp-block-heading\" id=\"h-fear-selling-out-america-on-ai-infrastructure\"><strong><em>Fear: Selling out America on AI infrastructure<\/em><\/strong><\/h3>\n\n\n\n<p>Last, but not least, there is real reason to fear that the Trump administration is simply going to get suckered into making bad deals that sell out this moment of opportunity. <a href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=5278764\">Analysis of the comments<\/a> submitted to the OSTP indicates that building AI infrastructure\u2014from data centers to power generation capacity to electrical transmission lines\u2014emerges as a \u201cnear universal concept amongst Big Tech firms.\u201d Given the administration\u2019s recent cheerleading for billions of dollars invested in Pennsylvania, it seems likely that there will be significant focus on infrastructure in the AI action plan. Yet if the Republican budget bill, with its devastation of green energy and handouts to oil and gas companies, is any indication, then there is reason to fear that President Trump will let the opportunity to affirmatively invest in smart, sustainable, critical public infrastructure slip through his fingers.<\/p>\n\n\n\n<p>Public Knowledge has been outspoken about our support for Public AI infrastructure, including in our comments to the OSTP under both <a href=\"https:\/\/publicknowledge.org\/policy\/ostp-artificial-intelligence\/\">former President Biden<\/a> and <a href=\"https:\/\/publicknowledge.org\/policy\/2025-artificial-intelligence-action-plan-comments\/\">President Trump<\/a>. 
And in this proceeding, plenty of other commenters joined in as well, for example: The <a href=\"https:\/\/static1.squarespace.com\/static\/5e449c8c3ef68d752f3e70dc\/t\/67d87bb05bd77b101199574b\/1742240688171\/OSTP+RFI+AI+Action+Plan+-+OMI+Submission.pdf\">Open Markets Institute wrote<\/a> about the critical competitive advantages public utility regulation and public infrastructure would provide; <a href=\"https:\/\/blog.mozilla.org\/netpolicy\/files\/2025\/03\/AI-Action-Plan-RFI_Mozilla-Submission.pdf\">Mozilla wrote<\/a> in explicit support of Public AI infrastructure like the NAIRR and the Department of Energy\u2019s Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative; Encode wrote in support of NAIRR; and a <a href=\"https:\/\/ash.harvard.edu\/wp-content\/uploads\/2025\/05\/GETTING-Plurality-RFI-Response-AI-Action-Plan.pdf\">network of academics highlighted<\/a> the <a href=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/412\/2024\/09\/27201409\/VPA-Paper-National-Security-Case-for-AI.pdf\">national security case<\/a> for publicly owned AI tech stack components.&nbsp;<\/p>\n\n\n\n<p>Despite all this support for using public dollars to invest in public infrastructure, President Trump\u2019s AI action plan could wind up instead focusing on infrastructure strategies that only cater to the private sector, even going so far as to give away federal dollars or resources without any return for the American people. That would be a monumentally bad deal.<\/p>\n\n\n\n<p>When it comes to energy infrastructure, another bad deal seems to be brewing: The Trump administration apparently plans on boosting dirty energy projects to the exclusion of green energy. This is an environmental and climate change issue to be sure\u2014AI data centers have significant energy needs and civil society commenters have warned the OSTP about those dangers. 
But this is not just about sustainability: If you share the belief that AI success will create massive demand for power, then we should use everything at our disposal to meet it! We need massive investments in wind, solar, geothermal, and nuclear power to prepare for a high-tech future. This should not be a partisan issue: Texas\u2014a traditional bastion of the oil and gas industry\u2014has brought online more solar power than any other state. China is seizing more and more of the solar energy market, and preparing for AI by investing in American dominance across the energy sectors of the future is the best move our nation could make. Yet President Trump\u2019s action plan may instead sell out the possibility of building critical energy infrastructure to appease cronies in the oil and gas industries.<\/p>\n\n\n\n<h2 class=\"heading-2 wp-block-heading\" id=\"h-a-people-powered-plan\"><strong>A People-powered Plan<\/strong><\/h2>\n\n\n\n<p>At Public Knowledge, we believe that technology policy must begin and end with the public interest. Last year, in our post \u201c<a href=\"https:\/\/publicknowledge.org\/putting-the-public-first-the-road-to-accountable-ai\/\">Putting the Public First<\/a>\u201d in response to the Senate\u2019s AI legislative roadmap, we laid out a vision rooted in openness, accountability, civil rights, and democratic governance. These are not just abstract principles, but concrete tools to ensure AI systems work for everyone. We don\u2019t believe the government should simply cheer on \u201cinnovation\u201d without asking: innovation for <em>whom<\/em>?&nbsp;<\/p>\n\n\n\n<p>If the Trump administration\u2019s \u201cWinning the AI Race\u201d plan turns out to be what we fear, then we will need strong alternatives rooted in these principles that speak for the people, not just the powerful. 
That\u2019s why we have joined a broad coalition of more than 90 tech, economic justice, consumer protection, labor, environmental justice, and civil society organizations in launching the <a href=\"https:\/\/peoplesaiaction.com\/\">People\u2019s AI Action Plan<\/a>, a proactive effort to offer a vision for AI that delivers first and foremost for the American people. We are proud to bring to this effort our expertise and our vision of a creative and connected future for everyone.<\/p>\n\n\n\n<p>No matter what tomorrow holds, our commitment is clear: We will continue to work with civil society allies, public interest technologists, researchers, industry, and community leaders to advance smart policies that match the vast potential of innovation with the guiding values of the public good. If we stick to that plan, we will build a future where the winners of the AI race are the people.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A look at what President Trump\u2019s AI action plan could still get right\u2014and what we can\u2019t afford to get wrong.<\/p>\n","protected":false},"author":205,"featured_media":38163,"parent":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[5],"tags":[14,29],"class_list":["post-38161","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-insights","tag-platform-regulation","tag-trustworthy-ai"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.5 (Yoast SEO v26.5) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Hopes and Fears for President Trump\u2019s AI Action Plan - Public Knowledge<\/title>\n<meta name=\"description\" content=\"A look at what Trump\u2019s AI plan could still get right\u2014and what we can\u2019t afford to get wrong.\" \/>\n<meta name=\"robots\" content=\"index, follow, 
max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Hopes and Fears for President Trump\u2019s AI Action Plan\" \/>\n<meta property=\"og:description\" content=\"A look at what Trump\u2019s AI plan could still get right\u2014and what we can\u2019t afford to get wrong.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/\" \/>\n<meta property=\"og:site_name\" content=\"Public Knowledge\" \/>\n<meta property=\"article:published_time\" content=\"2025-07-22T21:48:26+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-07-22T22:25:21+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/07\/white-house-1440x720.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1440\" \/>\n\t<meta property=\"og:image:height\" content=\"720\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Nicholas Garcia\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Nicholas Garcia\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"18 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/\"},\"author\":{\"name\":\"Nicholas Garcia\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/person\/73c0c1e501582b35b62e3973bc8e3692\"},\"headline\":\"Hopes and Fears for President Trump\u2019s AI Action Plan\",\"datePublished\":\"2025-07-22T21:48:26+00:00\",\"dateModified\":\"2025-07-22T22:25:21+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/\"},\"wordCount\":3949,\"publisher\":{\"@id\":\"https:\/\/publicknowledge.org\/#organization\"},\"image\":{\"@id\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/07\/white-house.png\",\"keywords\":[\"Platform Regulation\",\"Trustworthy AI\"],\"articleSection\":[\"Insights\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/\",\"url\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/\",\"name\":\"Hopes and Fears for President Trump\u2019s AI Action Plan - Public 
Knowledge\",\"isPartOf\":{\"@id\":\"https:\/\/publicknowledge.org\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/07\/white-house.png\",\"datePublished\":\"2025-07-22T21:48:26+00:00\",\"dateModified\":\"2025-07-22T22:25:21+00:00\",\"description\":\"A look at what Trump\u2019s AI plan could still get right\u2014and what we can\u2019t afford to get wrong.\",\"breadcrumb\":{\"@id\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#primaryimage\",\"url\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/07\/white-house.png\",\"contentUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/07\/white-house.png\",\"width\":2000,\"height\":1000},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/publicknowledge.org\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Hopes and Fears for President Trump\u2019s AI Action Plan\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/publicknowledge.org\/#website\",\"url\":\"https:\/\/publicknowledge.org\/\",\"name\":\"Public 
Knowledge\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/publicknowledge.org\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/publicknowledge.org\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/publicknowledge.org\/#organization\",\"name\":\"Public Knowledge\",\"url\":\"https:\/\/publicknowledge.org\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2021\/12\/pk_social_logo-2.png\",\"contentUrl\":\"https:\/\/publicknowledge.org\/wp-content\/uploads\/2021\/12\/pk_social_logo-2.png\",\"width\":400,\"height\":200,\"caption\":\"Public Knowledge\"},\"image\":{\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/person\/73c0c1e501582b35b62e3973bc8e3692\",\"name\":\"Nicholas Garcia\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/publicknowledge.org\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/907f7af37f4f165d906555c234c4afce8072f6d46024cb521cb26f2bfbb00723?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/907f7af37f4f165d906555c234c4afce8072f6d46024cb521cb26f2bfbb00723?s=96&d=mm&r=g\",\"caption\":\"Nicholas Garcia\"},\"url\":\"https:\/\/publicknowledge.org\/author\/nick-garcia\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Hopes and Fears for President Trump\u2019s AI Action Plan - Public Knowledge","description":"A look at what Trump\u2019s AI plan could still get right\u2014and what we can\u2019t afford to get wrong.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/","og_locale":"en_US","og_type":"article","og_title":"Hopes and Fears for President Trump\u2019s AI Action Plan","og_description":"A look at what Trump\u2019s AI plan could still get right\u2014and what we can\u2019t afford to get wrong.","og_url":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/","og_site_name":"Public Knowledge","article_published_time":"2025-07-22T21:48:26+00:00","article_modified_time":"2025-07-22T22:25:21+00:00","og_image":[{"width":1440,"height":720,"url":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/07\/white-house-1440x720.png","type":"image\/png"}],"author":"Nicholas Garcia","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Nicholas Garcia","Est. 
reading time":"18 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#article","isPartOf":{"@id":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/"},"author":{"name":"Nicholas Garcia","@id":"https:\/\/publicknowledge.org\/#\/schema\/person\/73c0c1e501582b35b62e3973bc8e3692"},"headline":"Hopes and Fears for President Trump\u2019s AI Action Plan","datePublished":"2025-07-22T21:48:26+00:00","dateModified":"2025-07-22T22:25:21+00:00","mainEntityOfPage":{"@id":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/"},"wordCount":3949,"publisher":{"@id":"https:\/\/publicknowledge.org\/#organization"},"image":{"@id":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#primaryimage"},"thumbnailUrl":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/07\/white-house.png","keywords":["Platform Regulation","Trustworthy AI"],"articleSection":["Insights"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/","url":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/","name":"Hopes and Fears for President Trump\u2019s AI Action Plan - Public Knowledge","isPartOf":{"@id":"https:\/\/publicknowledge.org\/#website"},"primaryImageOfPage":{"@id":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#primaryimage"},"image":{"@id":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#primaryimage"},"thumbnailUrl":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/07\/white-house.png","datePublished":"2025-07-22T21:48:26+00:00","dateModified":"2025-07-22T22:25:21+00:00","description":"A look at what Trump\u2019s AI plan could still get right\u2014and what we can\u2019t afford to get 
wrong.","breadcrumb":{"@id":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#primaryimage","url":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/07\/white-house.png","contentUrl":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2025\/07\/white-house.png","width":2000,"height":1000},{"@type":"BreadcrumbList","@id":"https:\/\/publicknowledge.org\/hopes-and-fears-for-president-trumps-ai-action-plan\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/publicknowledge.org\/"},{"@type":"ListItem","position":2,"name":"Hopes and Fears for President Trump\u2019s AI Action Plan"}]},{"@type":"WebSite","@id":"https:\/\/publicknowledge.org\/#website","url":"https:\/\/publicknowledge.org\/","name":"Public Knowledge","description":"","publisher":{"@id":"https:\/\/publicknowledge.org\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/publicknowledge.org\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/publicknowledge.org\/#organization","name":"Public Knowledge","url":"https:\/\/publicknowledge.org\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/publicknowledge.org\/#\/schema\/logo\/image\/","url":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2021\/12\/pk_social_logo-2.png","contentUrl":"https:\/\/publicknowledge.org\/wp-content\/uploads\/2021\/12\/pk_social_logo-2.png","width":400,"height":200,"caption":"Public 
Knowledge"},"image":{"@id":"https:\/\/publicknowledge.org\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/publicknowledge.org\/#\/schema\/person\/73c0c1e501582b35b62e3973bc8e3692","name":"Nicholas Garcia","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/publicknowledge.org\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/907f7af37f4f165d906555c234c4afce8072f6d46024cb521cb26f2bfbb00723?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/907f7af37f4f165d906555c234c4afce8072f6d46024cb521cb26f2bfbb00723?s=96&d=mm&r=g","caption":"Nicholas Garcia"},"url":"https:\/\/publicknowledge.org\/author\/nick-garcia\/"}]}},"_links":{"self":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/posts\/38161","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/users\/205"}],"replies":[{"embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/comments?post=38161"}],"version-history":[{"count":0,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/posts\/38161\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/media\/38163"}],"wp:attachment":[{"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/media?parent=38161"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/categories?post=38161"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/publicknowledge.org\/wp-json\/wp\/v2\/tags?post=38161"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}